A few short years ago, I recall attending a conference where the goal seemed to be to work the word “Docker” into your presentation title as many times as possible. This was a guaranteed way to generate buzz, because containers were a hype machine. And, although everyone was very excited about containers at the time, few knew what they actually were. Yet, for all the buzz and perceived complexity, containers were actually a fairly simple concept. They packaged all the software, executables, and dependencies needed to run an application into a portable unit that could be run on any Linux platform.
Today, great interest in containers continues, for both cloud and on-premises deployments. But, what was once a hype machine has become the real deal, and for good reason. Containers make it easy to develop, deploy, and scale new applications, due in large part to their portability, efficiency, and manageability.
The portability of containers means they can run on-premises, in a public or private cloud, or on bare metal. Containers can be easily moved from one machine to another, much like virtual machines. There are differences, however. A virtual machine bundles up not just the application software but an entire operating system, and is therefore heavier. Because containers do not each require their own copy of the operating system, they consume fewer system resources. At the same time, because they share an operating system, there is less isolation between containers running on the same system, so there is a chance they could compete for lower-level resources.
The major difference between the containers at that conference and what dominates the conversation today, however, is the enhanced orchestration. Containers on their own make deployment easy, but they become truly powerful when you can take a few containers, easily deploy thousands of copies of them, and move them around. If one fails, you can spin up another. If you are overloaded and cannot meet demand, you can spin up a dozen more. If you are over-provisioned and idle, you can shut a few of them down. Kubernetes, or other tools that leverage Kubernetes for container orchestration, can be used in this context to great effect. They preserve the container's basic benefits of packaging, deployment flexibility, and simplicity, and then give you a way to scale. This combination creates a powerful dynamic that is changing the game.
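To make that elasticity concrete, here is a minimal sketch using the official Kubernetes Python client. The deployment name "my-app" and namespace "demo" are hypothetical placeholders, not part of any EDB product; the point is simply that scaling out or back is a one-line declaration that Kubernetes converges to.

```python
# Minimal scaling sketch using the official Kubernetes Python client
# (pip install kubernetes). "my-app" and "demo" are hypothetical names.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Declare the desired replica count; Kubernetes converges to it,
    replacing failed pods and spreading copies across nodes."""
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    config.load_kube_config()                # reads ~/.kube/config
    scale_deployment("my-app", "demo", 12)   # overloaded: spin up more
    scale_deployment("my-app", "demo", 2)    # idle: shut a few down
```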
Yet, this is all just beginning, because containers are still at an early point on the adoption curve. Most container adoption so far has occurred in DevOps environments, where agile deployment of new applications is essential. And although using containers for databases is still relatively new, they are expected to deliver the same benefits there that they already deliver for application development. This pattern will look familiar to anyone who witnessed the arc of virtual machine adoption over the last decade. Today, virtual machines are by far the most prevalent deployment model for data centers, while the story of containers is still being written.
At EDB, we have embraced containers, and are now delivering EDB Postgres Containers with robust capabilities and simplified deployment. Our containers enable organizations to deploy the EDB Postgres Platform in Red Hat OpenShift, orchestrated by Kubernetes, providing a clustered solution for both high availability and read scaling. In addition, we provide containers that give users the ability to administer backups and take advantage of our tooling around monitoring and management.
The EDB Postgres solution is delivered as three containers:
High availability data management—with EDB Postgres Advanced Server and the EDB Postgres Failover Manager
This is the core container, which includes our database server and our failover manager. They are configured with Kubernetes in an autopilot pattern, where each container monitors itself so that proper setup can be verified. These containers can be deployed as a master with any number of replicas, distributing the data across several containers. If the master fails, one of the replicas takes over automatically.
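As an illustration of the kind of self-monitoring involved, here is a minimal sketch of a readiness-style probe a database container might run. It uses the psycopg2 driver and plain PostgreSQL calls; it shows the pattern only, and is not EDB's actual failover logic.

```python
# Sketch of a self-monitoring probe a database container might run.
# Uses psycopg2 (pip install psycopg2-binary); illustrative only.
import sys
import psycopg2

def check_node(dsn: str) -> bool:
    """Return True if Postgres accepts queries; report the node's role."""
    try:
        conn = psycopg2.connect(dsn, connect_timeout=3)
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery();")
            in_recovery = cur.fetchone()[0]
        conn.close()
        # pg_is_in_recovery() is true on a replica, false on the master.
        print("replica" if in_recovery else "master")
        return True
    except psycopg2.OperationalError:
        return False

if __name__ == "__main__":
    dsn = "host=localhost dbname=postgres user=postgres"
    # The exit status doubles as a Kubernetes liveness/readiness signal.
    sys.exit(0 if check_node(dsn) else 1)
```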
Read scaling—delivered by query routing and connection pooling from pgPool
The pgPool container delivers read scaling, acting as a load balancer, a query router, and a workload management tool. It sits in front of the database containers but runs in its own container, so it can be scaled out independently of the database as needed.
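Because pgPool presents itself as an ordinary Postgres endpoint, an application simply connects to the pooler's service rather than to a database container. In this sketch, the service name "pgpool-service", the credentials, and the "orders" table are hypothetical placeholders; 9999 is pgpool's conventional listen port.

```python
# Connecting through the pooler instead of a database container directly.
# "pgpool-service", the credentials, and "orders" are hypothetical.
import psycopg2

# The application is unaware of the routing: pgPool can send this
# read-only query to one of the replicas, spreading load across them.
conn = psycopg2.connect(
    host="pgpool-service",
    port=9999,
    dbname="postgres",
    user="app_user",
    password="app_secret",
)
with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM orders;")
    print(cur.fetchone()[0])
conn.close()
```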
Disaster recovery—with the EDB Postgres Backup and Recovery Tool
The EDB Postgres Backup and Recovery Tool (BART) runs in its own container and can back up databases in many other containers, enabling a single BART deployment to oversee multiple database deployments.
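A scheduled job in the BART container might drive backups for several servers from one place, along these lines. The server names are hypothetical entries assumed to be configured in BART, and the exact command syntax should be checked against EDB's BART documentation.

```python
# Sketch of driving BART backups for several database servers from one
# place. Server names are hypothetical; consult EDB's BART docs for
# the full command-line options.
import subprocess

SERVERS = ["acctg", "marketing"]   # hypothetical configured servers

for server in SERVERS:
    # "bart BACKUP -s <server>" takes a backup of the named server.
    result = subprocess.run(
        ["bart", "BACKUP", "-s", server],
        capture_output=True,
        text=True,
    )
    status = "ok" if result.returncode == 0 else "failed"
    print(f"backup of {server}: {status}")
```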
These container capabilities are built into EDB Postgres today, and organizations can roll them out and take advantage of them immediately. Many of our clients are starting with simple applications and plan to move to more complex systems over time. We are working closely with our customers to share best practices for leveraging containers to move from big monolithic applications to microservices, and we can help you, too.
Ken Rugg is Chief Product and Strategy Officer of EnterpriseDB.