The 10-year research-to-production timeline is the key lesson. Today's funding (VC or government grants) demands results in 2-3 years. We've systematically eliminated the "patient capital" that creates foundational infrastructure imho...
To say nothing of systematically eliminating the foundational infrastructure for nationally funded science in general.
In other words, China's success is in part similar to what used to make the US successful. Any lessons to be taken from that? No.
The Chinese follow a five-year cycle: https://en.wikipedia.org/wiki/Five-year_plans_of_China
The pattern is adopted from the Soviet Union. Take whatever lessons from that you will.
Oh, so ominous. Your claim is that the number 5 is cursed because it is a communist number?
There is no lesson to take from the number 5. There are lessons to take from longer term planning.
> Your claim is that the number 5 is cursed because it is a communist number?
That’s your own nonsensical strawman, not mine. Who in their sane mind would stoop to such silly numerology? That sounds positively medieval.
What were you even thinking when you wrote your reply? You're really going to have to unpack your thought process for me to understand what you said.
no, you need to explain why you thought the Soviet connection was important enough to mention. you could have said you agreed with the number or you thought it was too high for reasons or too low for reasons.
https://en.wikipedia.org/wiki/Five-year_plans_of_the_Soviet_...
The number is not what is relevant in his comment.
> Today's funding (VC or government grants) demands results in 2-3 years
This is nonsense. VCs have been happily investing in technology on 5 to 10 year timelines; traditional VC funds were raised with 7 to 10 year tenors.
> We've systematically eliminated the "patient capital" that creates foundational infrastructure
Did you miss all the space and fusion funding? Biotech? Flying cars? The folks on this board complaining investors have infinite timelines for results?
> The 10-year research-to-production timeline is the key lesson. Today's funding (VC or government grants) demands results in 2-3 years.
Don't forget all the cries of "governments can't do anything, only free market commercial entities can innovate!"
What’s the current state of SDN development these days?
I remember working on related projects about ten years ago in grad school, and even back then it felt like a somewhat naive and overhyped form of “engineering innovation.”
Take OpenFlow, for example — every TCP connection had to go through the controller to set up a per-connection flow match entry for the path. It always struck me as a bit absurd.
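For readers who never saw that pattern: below is a minimal sketch of the reactive model being described. I've written it against the Ryu controller framework as an assumption for illustration (the early controllers like NOX predate it, but the shape of the logic was the same): the switch punts the first packet of a connection to the controller, which then installs an exact-match, per-connection flow entry.

```python
# Minimal sketch of a reactive OpenFlow controller (Ryu framework assumed).
# The first packet of each TCP connection is punted to the controller, which
# then installs an exact 5-tuple flow entry for that single connection --
# exactly the per-connection setup described above.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import packet, ipv4, tcp
from ryu.ofproto import ofproto_v1_3


class ReactivePerFlowSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser

        pkt = packet.Packet(msg.data)
        ip = pkt.get_protocol(ipv4.ipv4)
        tp = pkt.get_protocol(tcp.tcp)
        if ip is None or tp is None:
            return  # only TCP is handled in this illustration

        # Hypothetical forwarding decision; a real controller would consult
        # its topology database and compute a path here.
        out_port = ofp.OFPP_FLOOD

        # Exact 5-tuple match: one flow entry per TCP connection.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                ipv4_src=ip.src, ipv4_dst=ip.dst,
                                tcp_src=tp.src_port, tcp_dst=tp.dst_port)
        actions = [parser.OFPActionOutput(out_port)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10, match=match,
                                      instructions=inst, idle_timeout=30))
```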
At the time, the main push came from Stanford’s “clean slate” networking project led by Prof. Nick McKeown. They spun off things like Open vSwitch, Big Switch Networks, and even open-source router efforts like NetFPGA. Later, the professor went back into industry.
Looking back, the whole movement feels like a startup-driven experiment that got heavily “packaged” but never really solved any fundamental problem. I mean, traditional distributed-routing-based network gear was already working fine — didn’t they already have admin interfaces for configuration anyway (or call that admin interface SDN )? lol ~
It's all at the big cloud service providers. Not as much focused on the physical network (as originally imagined), but on the overlay networks. See the various DPUs like Intel IPU, Nvidia/Mellanox BlueField, etc. Nvidia DOCA even uses OvS as the sort of out-of-the-box example software for implementing networking on BlueField. When your controller is Arm cores 5 cm away on the same PCB, doing per-connection setup is no longer as absurd ;)
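To give a flavor of what "OvS doing the overlay" means in practice, here is a rough sketch using plain ovs-vsctl driven from Python; the bridge name, tunnel name, peer address, and VNI are made up for illustration, and a real DPU deployment (DOCA, IPU SDK, etc.) wraps this plumbing in its own control software.

```python
# Rough sketch of overlay plumbing with Open vSwitch: an integration bridge
# plus a VXLAN tunnel port keyed by a tenant VNI. Names and addresses are
# illustrative only.
import subprocess

def sh(*args):
    subprocess.run(["ovs-vsctl", *args], check=True)

# Integration bridge that local VMs/containers attach to.
sh("--may-exist", "add-br", "br-int")

# VXLAN tunnel to a peer hypervisor/DPU; options:key carries the tenant VNI,
# so tenant traffic stays isolated on the shared physical underlay.
sh("--may-exist", "add-port", "br-int", "vxlan-peer1",
   "--", "set", "interface", "vxlan-peer1",
   "type=vxlan", "options:remote_ip=192.0.2.10", "options:key=1001")
```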
To me, server/networking hardware companies have a wet dream of manipulating workloads on physical servers the way one manipulates VMs in cloud computing.
Except the dream is to not do it just within a blade enclosure, but across blades in multiple racks, with network-based storage in a multi-tenant environment. Maybe even across datacenters.
At some point, dealing (in an automated manner) with discovery, abstraction, and routing across different networking topologies, blade enclosures, rack switches, etc. becomes insane.
Of course a sysadmin with a few shell scripts could practically do the same for meaningful use cases without the general solution’s decade-long engineering effort and vendor lock-in…
SDN is great if you're trying to build something like a multi-tenant cloud on top of another network of machines. Your DPUs can handle all the overlay logic as if there were a top-of-rack switch in each chassis.
A lot of mistakes were made. Almost all the code has been thrown away and all the details are different but maybe some of the ideas influenced things that exist today.
i was in close contact with telecoms during that timeframe. they went bananas with it because all of it was new to them. so they abused and misused it.
one of them, for example, used OpenDaylight not for its OpenFlow capabilities, but via some heavily customized plugin and a kind of orchestrator for automation, with some crazy YANG models that were sent for execution to a downstream orchestrator.
but from their perspective, and the perspective of management, they were doing SDN.
traditional network gear had "element controllers". some of them got rebranded into "SDN*something" and got interface facelifts.
ps. SDN/OpenFlow as you describe it was absolutely out of the question for deployment in production networks. they could talk about all the benefits of it, but nobody dared to do anything with it and, arguably, they had no real need.
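For readers who haven't seen that flavor of "SDN": the automation described above typically boiled down to pushing YANG-modeled JSON at a controller's RESTCONF API and letting it drive a downstream orchestrator. A hypothetical sketch, with the module path, payload, and credentials all invented for illustration:

```python
# Hypothetical sketch of "SDN as service orchestration": push a YANG-modeled
# config document to a controller's RESTCONF interface. The module name,
# container, payload, and credentials below are invented for illustration.
import requests

ODL = "http://controller.example:8181/restconf/config"
payload = {
    "example-l3vpn:vpn-service": {   # hypothetical YANG module/container
        "name": "customer-42",
        "endpoints": ["pe1", "pe7"],
    }
}

resp = requests.put(f"{ODL}/example-l3vpn:vpn-service",
                    json=payload, auth=("admin", "admin"),
                    headers={"Content-Type": "application/json"})
resp.raise_for_status()
```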
> a network should have logically centralized control, where the control software has network-wide visibility and direct control across the distributed collection of network devices.
Including a backdoor for wiretapping in SDN-enabled routers.
Is it really a “back door” when it’s controlled by the network owner? It feels like we need a different term for that since it’s increasingly common on large networks.
The question is who can send commands as network owner. The basic idea of SDN is that when A wants to talk to B, a message is sent to some control point to determine the path. The path is then sent down to the routers along the path. Packets which ordinarily would go nowhere near eavesdropping point C can be redirected to go through C, on a per A/B pair basis.
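A toy illustration of that concern, using networkx as a stand-in for whatever path computation a real controller runs: whoever controls the path computation can splice an eavesdropping hop into a specific A-to-B flow.

```python
# Toy illustration: a (compromised) controller can steer one A->B flow through
# an eavesdropping node C simply by changing how it computes the path.
# networkx stands in for a real controller's path computation.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "R1"), ("R1", "R2"), ("R2", "B"),
                  ("R1", "C"), ("C", "R2")])  # C is the tap

normal_path = nx.shortest_path(G, "A", "B")
# Force the flow through C: path A->C plus C->B (dropping C's duplicate entry).
tapped_path = nx.shortest_path(G, "A", "C") + nx.shortest_path(G, "C", "B")[1:]

print(normal_path)   # ['A', 'R1', 'R2', 'B']
print(tapped_path)   # ['A', 'R1', 'C', 'R2', 'B']
```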
Unless the goal of the backdoor is to redirect traffic flows through packet inspection devices that the attacker also controls, the decoupling of the control and data plane in SDN deployments requires a more creative, intricate solution to allow for wiretapping compared to traditional routers.
what a wonderful chronicle of how esoteric research became not-esoteric, and truly world-changing, and how the NSF enabled it
pour one out for the NSF folks. RIP </3
I worked with quite a few of the folks mentioned in this article when I was at the Open Networking Foundation, if anyone has questions.
What's your view on how these people actually impacted the adoption of SDN in general?
> The investments NSF made in SDN over the past two decades have paid huge dividends.
In my view this seems a little overblown. The general idea of separation of control and data plane is just that - an idea. In practice, none of the early firms (like Nicira) have had any significant impact on what's happening in industry. Happy to be corrected if that's not accurate!
Depends where you are in the industry - the hyperscalers specifically have the budget to afford a team to write P4 or other SDN code to manage their networks in production, so they're probably the biggest beneficiaries.
At the lower end, it did make programmability more accessible to more folks and enabled whitebox switches to compete against entrenched players to a far greater extent than previously possible. Again, hyperscalers are going to be the main folks who buy this kind of gear and run SONiC or similar on it, so they can own the full switch software stack.
Many of the startup companies in the SDN space did have successful exits into larger players - for example Nicira into VMware, Barefoot (the Tofino switch chip) and Ananki (the ONF 4G/5G spinoff) into Intel. Also, much of the software was developed as open source, and is still out there to be used and built on.
> 2003: The goal of the 100×100 project was to create communication architectures that could provide 100Mb/s networking for all 100 million American homes.
Well you failed horribly.
> The project brought together researchers from Carnegie Mellon, Stanford, Berkeley, and AT&T.
I think I see why.
> This research led to the 4D architecture for logically centralized network control of a distributed data plane
What? How was this meant to benefit citizens?
> Datacenter owners grew frustrated with the cost and complexity of the commercially available networking equipment; a typical datacenter switch cost more than $20,000 and a hyperscaler needed about 10,000 switches per site. They decided they could build their own switch box for about $2,000 using off-the-shelf switching chips from companies such as Broadcom and Marvell
What role did the NSF play here? It sounds like basic economics did most of the actual work.
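For scale, the figures in the quoted passage work out to roughly 10,000 × $20,000 ≈ $200M of off-the-shelf switching per site versus 10,000 × $2,000 ≈ $20M self-built, i.e. on the order of $180M in hardware savings per site.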
> The start-up company Nicira, which emerged from the NSF-funded Ethane project, developed the Network Virtualization Platform (NVP)26 to meet this need
Which seems to have _zero_ mentions outside of academic papers.
Nicira NVP is now VMware NSX which is pretty successful. AWS/GCP/Azure VPC are also probably inspired by Nicira.
>Which seems to have _zero_ mentions outside of academic papers.
Nicira or NVP?