In our last post we finished a detailed examination of the different aspects of Interoperability. In this post, we will analyse the different mindsets of the traditional networking and software development domains, and explain why there is often built-in dissonance between them.
Whilst Open Network solutions require the integration of network and software components and practices, at the current time (and historically) these two domains are largely incompatible. Unless carefully managed, this incompatibility will cause project stress and impairment.
Given that many (if not most) Open Network solutions originate in the Network Engineering department within a user organisation, this is an important consideration for the entire lifecycle of the solution; especially so if the Network Engineering team does not have established software skills and experience.
There are many forms of dissonance that can be experienced in an Open Networking project due to differing paradigms or mindsets. Below we cover the top four aspects of the problem:
We described in Software Interlude Part 6 – Development Paradigms that traditional network engineering aligns more with the production model of work, i.e. the design and production processes are largely serialised and separate.
Software development on the other hand operates on a different paradigm, in which design and production are largely intermingled: not just parallel but intertwined within the same team and the same resources.
Networks (in general) are designed using discrete components and can be designed and built along fairly pre-determined and predictable steps guided by engineering principles. Networks are highly mechanical and mathematical in nature, following a well-established set of rules. Even the software components of traditional network equipment (configuration) follow rules backed by years of mathematical research. Network designs can be validated in advance using the same techniques.
Practically, we see the implications of this in the way network projects are executed. Formally, network projects follow far more of a plan-based (aka Waterfall) lifecycle model. There are many logical reasons why the plan-based approach is better suited to this type of project.
Informally, we also see this: it’s typical that a senior, more experienced, person will do the network design and create a specification for how the network is to be built. This network design is typically handed off to other technical personnel for the build.
Flexibility is a key aspect of software development projects: it underpins everything a software developer does and thinks. Networks appear to value other things: integrity, security and so on. The difference comes down to the relative size of increments, prototypes and/or MVPs. Note: the MVP (Minimum Viable Product) is the smallest component that can be deployed to production and which enables at least one valuable use case.
Small increments in functionality, prototypes and MVPs are important parts of the solution development process. They all support the agile principle of inspect and adapt.
For software, these increments can be very small and be produced very rapidly. Traditionally, in the network domain, creating a small instance of some aspect of a solution faces a much higher hurdle. Model labs or test environments may exist, but these are typically insufficient for the dynamic changes that iteration requires; that is, if they are available at all and have the right hardware in sufficient quantities.
It is not uncommon for network projects to be built to very general requirements and not to specific end-user use cases. The logical flow-on from this is that end-users are not actively engaged in the development lifecycle.
Software projects, and in particular Agile software projects, are built on engagement with end-users: the expectation is that end-users will interact with developers on a daily basis. This requires certain skillsets that are well-developed in software engineers (well, to varying degrees), but few Network engineers have this experience.
In general, network engineers have a much higher expectation of out-of-the-box interoperability than software developers, notwithstanding the softwareisation of networks.
Experienced software developers typically have a high level of scepticism when it comes to claims of interoperability, and will naturally plan in validation processes to ensure they understand how the product will actually work. Network engineers and architects appear more ready to accept claims of interoperability or standards compliance, and don't necessarily prepare for validation processes, except for the first-time onboarding of equipment into a network.
But given the different natures of the products, an initial validation for a software product can have a relatively short life (as new updates can break this tested functionality), whereas initial validation of a hardware product has a much longer life.
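One way software teams put this scepticism into practice is to encode their interoperability assumptions as an automated check that can be re-run after every update, rather than trusted once at onboarding. A minimal sketch in Python follows; the capability names and the device record are hypothetical illustrations, and a real harness would query the device itself (e.g. via a NETCONF capabilities exchange) instead of reading a static record:

```python
# Hypothetical sketch: express vendor interoperability claims as an
# executable check, so validation can be repeated after each software
# update instead of being a one-off onboarding exercise.

# Capabilities our solution design assumes the device supports
# (illustrative names only).
REQUIRED_CAPABILITIES = {
    "netconf:1.1",
    "ietf-interfaces",
    "openconfig-bgp",
}

def fetch_capabilities(device: dict) -> set:
    """Placeholder: a real harness would query the live device here;
    this sketch just reads the device's recorded capability list."""
    return set(device["capabilities"])

def validate(device: dict) -> set:
    """Return the set of required-but-missing capabilities (empty = pass)."""
    return REQUIRED_CAPABILITIES - fetch_capabilities(device)

device = {
    "name": "edge-router-1",  # hypothetical device
    "capabilities": {"netconf:1.1", "ietf-interfaces"},
}

missing = validate(device)
if missing:
    print(f"{device['name']} fails validation, missing: {sorted(missing)}")
    # → edge-router-1 fails validation, missing: ['openconfig-bgp']
```

Because the check is code, re-running it after a firmware or software update is cheap, which matters precisely because (as noted above) an initial validation of a software product can have a short life.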
The existence of these sources of dissonance, and more, can easily lead to project impairment if not anticipated and managed carefully.
In both project planning and execution, problems arise when one party wants to invest time in something (e.g. risk reserves or validation testing) that the other party doesn't see the need for (and consequently believes is unjustified padding of the estimates), or simply doesn't understand, leading to misunderstanding and miscommunication.
How do we manage this effectively? We treat everything as a software project.