The History and Progression of CPQ Cloud (and All Cloud) Computing

    “Stateless, stateful, multi-tenant, single-tenant, virtualization, microservices, orchestrators…”

    These, and many other similar terms, are part of my vocabulary as a KBMax architect. You might be surprised to learn that nowadays they’re a part of everyone’s life, even if it isn’t obvious. In today’s world everyone uses a cloud system in some form, but may or may not be aware of the implications of the services being used daily, such as cost, security, availability, and privacy.

    “What is Cloud Architecture, and Why Should I Care?”

    If you are selecting CPQ software (or any software) for your company, you may be interested in avoiding hidden future costs by simply investing in future-proof software. I’ve seen many companies who were convinced they were getting a cloud application, but later realized that they had purchased ‘virtualized software’ and paid a very high price, only to end up stuck with the same version of the software forever, racking up hidden costs.

    Since I’ve been programming for 40 years, I wanted to tell you the history of cloud architecture from my point of view. You can Google each of the terms above, but I think you can better understand the ‘why’ behind some of the architectural choices after learning the history behind them. As in any other field, ‘discoveries’ are the answer to current problems and needs.

    A Brief History of Distributed Computing

    Mainframes

    I will skip the mainframe because it is too deep in the ancient history of computing.

    Servers and Connected Desktops

    Let’s instead start at the beginning of this century, when desktop computers were common and servers were around, but only for a few tasks like file sharing, printing, user authentication, and databases. The data was physically ‘owned’, and the software and hardware were sold together. For users during this time, software was mainly local applications installed on desktop computers. Security was not a real problem, even though everybody was an ‘administrator’ and some users even had a ‘post-it’ with their password affixed to the monitor. The main issues were software versioning, hardware maintenance, backups, and the underutilization of the servers.

    I remember that the ‘server’ was the most expensive part of ‘the deal’, and the most underutilized: it was not rare to log in to a server and see that the most demanding CPU application was the spinning 3D text of the Windows screen saver (I never did understand why it was the most common one). Updating an application was a real pain and required several technicians to deal with the different problems that popped up. So, naturally, maintenance only happened when it was required or inevitable.

    Web-Based Computing

    Then the web started to enter the business world meaningfully, as having a website quickly became mandatory. That underutilized server started to be used by the most forward-thinking and brave companies. Organizations did not know that they were being ‘brave’, but they really were, considering the huge security problems they started encountering. IT departments started to grow: servers, dedicated internet connections, network peripherals, racks, cables, UPSs, etc.

    Server Virtualization

    It was time for a new kid on the block: Virtualization. “Instead of having many underutilized servers, what if we created a virtual copy of every server and then put them all into one physical server?” The idea was great, and honestly, it is still a great idea today.

    But from the point of view of desktop software, everything was exactly the same until we started to see web applications come into actual business use. A web application is typically split into two parts: the UI (user interface), built in HTML, CSS, and JavaScript, which executes logic in the client, and the remaining part that’s executed on the server side.

    Here we see the emergence of a new architectural term: Stateful. Stateful means that the server is fully aware of the client and the context of each transaction. Each transaction is performed in the context of previous transactions, and the current transaction may be affected by what happened during previous ones. For these reasons, stateful apps use the same servers each time they process a request from a user. Here’s the main issue: since each server can serve only a limited number of clients, scaling is not easy. You can’t just add a new server. Instead, you need to create routing logic such as, “Customers with names starting with A through G go to server 1, H through O go to server 2…”, and so on.

    But ‘stateful’ also means that the context is stored on the server. Sometimes the server also contains the database, or it hosts different virtual servers on the same hardware.

    As you can imagine, if the server drops for any reason…everything is lost!

    The solution to this challenge presents the opposite term: Stateless. A stateless server does not store the context of the client; it only executes the request. The client’s requests can jump between servers, and a ‘load balancer’ dynamically assigns each request to a server, distributing the load. If a server goes down, no problem! The load balancer simply stops routing traffic to the offline server. If the load increases beyond the capacity of the servers, it’s just a matter of adding additional stateless servers. Obviously, the opposite is also true: if the number of clients decreases, servers can easily be turned off. This is also called an ‘elastic pool’.
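
    To make the difference concrete, here is a minimal sketch of a stateless request handler in TypeScript (a hypothetical illustration, not KBMax code). Everything needed to answer the request arrives with the request itself, so any server behind the load balancer can handle it:

    // A stateless handler: no per-client context is kept in this process's memory,
    // so any server in the elastic pool can answer, and servers can be added or
    // removed freely behind the load balancer.
    import * as http from "http";

    const server = http.createServer((req, res) => {
      // All context comes from the request itself, not from server memory.
      const token = req.headers["authorization"];      // who the client is
      const url = new URL(req.url ?? "/", "http://localhost");
      const cartId = url.searchParams.get("cartId");   // which cart to price

      if (!token || !cartId) {
        res.writeHead(400);
        res.end("Missing token or cartId");
        return;
      }

      // In a real system, the cart would be loaded from shared storage (a database
      // or cache reachable by every server), never from this process's memory.
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ cartId, status: "priced" }));
    });

    server.listen(8080);

    Because nothing about the client lives in the process, the load balancer is free to send the next request from the same client to a completely different server.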

    From a developer’s point of view, this historical moment represented a huge shift. We had to move away from the desktop, where everything was a single application with all the resources and components of the software packed together. All the skills and all the problems lived in one ‘singularity’. The shift introduced a new set of problems: servers, connections, frontend logic, backend logic, new languages, and data abstraction. Many tried to adapt their existing knowledge to this new paradigm instead of starting from scratch and specializing in certain areas (some people are still stuck today in the singularity of the desktop application). You still regularly come across software that you can clearly tell was a ‘port’ of a desktop application to a cloud infrastructure, especially when you are forced to deal with ‘installations’, ‘files’, and ‘versions’.

    Cloud Applications

    We’re now in 2011: Occupy Wall Street is in full swing, and ‘cloud applications’ are finally being developed for a ‘cloud operating system’. It’s important to note that the execution of the software was still happening in virtual machines. The software architecture was ‘monolithic’, meaning that each virtual stateless server had a copy of the entire software. At best, the application was split into a web component and ‘worker’ components, where the ‘worker’ handled the ‘execution’ work: creating documents, compressing files, running compute-heavy algorithms, and other long-running tasks.

    This architecture was not really efficient from a CPU utilization standpoint and resulted in overloaded or underused servers. It was also inefficient for developers: in order to update even a small part of the software, you had to release the entire application on all the servers, resulting in downtime. Monolithic architecture is easier to develop, test, and deploy…but hard to scale.
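
    As a rough illustration of that web/worker split (a hypothetical TypeScript sketch, not how any particular product implements it), the web side accepts a request and queues the heavy work, while a separate worker loop executes the long-running jobs:

    // Hypothetical web/worker split: the web part enqueues work and returns quickly;
    // the worker drains the queue and runs the long tasks (document generation, etc.).
    type Job = { id: number; kind: "generate-document" | "compress-files" };

    const queue: Job[] = [];
    let nextId = 1;

    // "Web" side: accept the request and enqueue the heavy work instead of doing it inline.
    function handleRequest(kind: Job["kind"]): number {
      const job: Job = { id: nextId++, kind };
      queue.push(job);
      return job.id; // the client can poll for the result later
    }

    // "Worker" side: pick up queued jobs and execute them.
    function runWorkerOnce(): void {
      const job = queue.shift();
      if (job) {
        console.log(`Worker executing job ${job.id}: ${job.kind}`);
      }
    }

    handleRequest("generate-document");
    runWorkerOnce();

    In the monolithic era both halves shipped together, which is exactly why updating one small piece meant redeploying everything.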

    Here again we’re presented with another challenge that pushed the cloud forward: “How can we break down a monolithic application into smaller pieces?”

    Microservices, Orchestrators, and Containers, Oh My!

    The answer to monolithic cloud applications is ‘orchestration with microservices’. Let’s imagine splitting an application into self-contained pieces of business functionality, following the UNIX philosophy: “Do one thing and do it well”. Once you have your application split into these pieces, you can call them ‘microservices’. Imagine all of these microservices as pieces in a Tetris game, fitting together across all the virtual servers in a way that better utilizes the CPU, memory, network, and storage resources.

    The ‘Orchestrator’ is really the one ‘playing Tetris’, and the VMs are called ‘Nodes’ (or the levels/boards in our Tetris game). If a node goes down, the orchestrator can create new nodes or move its services to one or more other nodes. If you update a microservice, the orchestrator can keep the old version alive, deploy the new version, and then stop the old version.

    As you can imagine, orchestration with microservices is a very flexible and robust architecture, but it has a problem…someone has to manage and maintain the orchestrator!

    This drove us to reimagine the ‘virtual machine’. A virtual machine is a logical server containing the entire software stack: Drivers, Operating System, and Application. A physical machine can host multiple VMs, even for different customers because they’re completely isolated from each other. This approach is very powerful, but it isn’t very efficient because the OS uses a lot of resources only to ‘exist’, and every VM requires maintenance like upgrades, security patches, and configurations.

    A container isolates the application and shares the OS with all the other containers. Instead of virtualizing the hardware, like a VM does, a container virtualizes the OS.

    Ok, back to the architecture. You might have already guessed that the best candidate to sit inside a container is a microservice. Because the microservices are completely separated from one another, you can host ‘containerized microservices’ from different instances within the same orchestrator.
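
    To picture what one of these single-purpose services looks like, here is a small TypeScript sketch (hypothetical names and endpoints, not actual KBMax services). It does one thing, and it exposes a health probe so an orchestrator can decide whether its container should be restarted or rescheduled:

    // A single-purpose "pricing" microservice (hypothetical example). It owns one
    // piece of business functionality and keeps no local state, so the orchestrator
    // is free to move, scale, or replace its container at any time.
    import * as http from "http";

    function priceLine(quantity: number, unitPrice: number): number {
      // The one thing this service does well.
      return quantity * unitPrice;
    }

    const service = http.createServer((req, res) => {
      const url = new URL(req.url ?? "/", "http://localhost");

      if (url.pathname === "/healthz") {
        // Liveness probe: the orchestrator calls this to decide whether the
        // container is healthy or should be replaced.
        res.writeHead(200);
        res.end("ok");
        return;
      }

      if (url.pathname === "/price") {
        const qty = Number(url.searchParams.get("qty") ?? "0");
        const unit = Number(url.searchParams.get("unit") ?? "0");
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ total: priceLine(qty, unit) }));
        return;
      }

      res.writeHead(404);
      res.end();
    });

    service.listen(Number(process.env.PORT ?? 8080));

    Packaged in its own container, the only thing this service shares with its neighbors is the operating system underneath, which is what makes it so cheap for the orchestrator to pack services together like Tetris pieces.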

    Serverless

    We are at the final step of our history lesson, so let’s look at the last term: Serverless. If the orchestrator is managed by someone else, like a cloud provider, developing an application like KBMax is just a matter of developing and deploying containerized microservices. The microservices can be scaled, moved, restarted, and upgraded automatically, without wasting company resources. But how is the application served to each customer?

    The first option, Single-Tenancy, is the most obvious choice coming from an ‘on-premise’ mindset, since it involves a dedicated application and dedicated storage and database. It is a cloud application with one instance of everything dedicated to each customer. This approach has pros and cons: each customer can have a different upgrade path, backup, and control. However, it quickly becomes a drain on resources, since the client is given a false impression that they can control the releases and the security.

    The opposite option is called Multi-tenancy: There is only one application and one storage/database. All customers use the same application and their data is maintained side by side on the same storage/DB. This option is very efficient and the software releases are generally more frequent and less intrusive. But there is a catch. If you’re a company particularly attentive to security, you probably don’t want your data stored alongside other companies’ data. KBMax solves this problem with Hybrid Tenancy: One application, but a dedicated storage/DB for each customer.
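
    Here is a minimal sketch of that hybrid-tenancy idea in TypeScript (hypothetical tenant names and connection strings, not the actual KBMax implementation): one shared application resolves each request’s tenant to that customer’s own dedicated database.

    // Hybrid tenancy: one shared application, but each tenant's data lives in its
    // own dedicated database, looked up per request.
    interface TenantConfig {
      tenantId: string;
      dbConnectionString: string; // points to that customer's dedicated database
    }

    // In a real system this mapping would live in a secured configuration store.
    const tenants: Record<string, TenantConfig> = {
      acme:   { tenantId: "acme",   dbConnectionString: "postgres://db-acme/cpq" },
      globex: { tenantId: "globex", dbConnectionString: "postgres://db-globex/cpq" },
    };

    function resolveDatabase(tenantId: string): string {
      const tenant = tenants[tenantId];
      if (!tenant) {
        throw new Error(`Unknown tenant: ${tenantId}`);
      }
      // The application code is identical for every customer; only the storage
      // target changes, so releases stay frequent while data stays isolated.
      return tenant.dbConnectionString;
    }

    // A request tagged with tenant "acme" is routed to acme's own database.
    console.log(resolveDatabase("acme"));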

    Choosing a Real CPQ Cloud

    Hopefully this deep dive through the history of cloud computing helps you understand a little more about the ins and outs of cloud architecture. We love to share this knowledge with other companies, so they can learn how to spot applications that aren’t following the most recent trends to optimize speed, security, and access.

    KBMax is built using the latest cloud architectures following development best practices, ensuring top performance and security for our CPQ cloud customers. We often come across customers who were sold a ‘fake cloud’ by another CPQ vendor, only to realize the grift once it was too late. Look for a future article, where we’ll discuss the differences between types of cloud infrastructures (SaaS, IaaS, PaaS, etc.) and how they can fundamentally change the customer experience and total cost of ownership. Because, yeah, not all ‘clouds’ are the same.

    Luigi Ottoboni

    Luigi wrote his first software when he was eight years old. He started his first company, the first Internet Service Provider in the area, while he was still in university. He graduated with a degree in Mechanical Engineering and proceeded to found six other tech companies. As a KBMax Co-Founder, Luigi leads our Research and Development because he loves finding new technologies to keep KBMax 'on the cutting edge'. He's proud to say that he has seen more of the USA and Greece than an average American or Greek resident.
