A Look Behind the Curtain: The Infrastructure Behind PreSales’ Newest Solution, Eval
Last week, Vivun launched Eval to bring transparency and trust to B2B sales. As one of the engineers who sprinted to launch our second product for PreSales, I wanted to share a few of the technical decisions we’ve made behind the scenes that enable Vivun’s engineers to move fast, build quality software, and empower our customers.
If you’re in PreSales, you know that prospects want working products in their hands before they buy, and they want it yesterday. For those of us in software engineering, internal stakeholders can be the exact same way. Let’s dive into how we drive engineering velocity at Vivun to meet momentous goals in record-breaking time while still holding reliability and security to the highest standards.
Moving fast without breaking things
Every software engineer has stories from past lives of late-night troubleshooting sessions, weekends missed with family members, and generous usage of fire-related emojis. When we set out to build our second product, Eval, the team was determined to avoid those experiences as much as possible.
Here at Vivun, we embrace the DevSecOps mindset, building systems and procedures that make it convenient and practical for our developers to take full ownership of their applications. To that end, we made a few key choices around our infrastructure and deployments:
- Running all of our infrastructure in containers and embracing managed services when it made sense to do so
- Leveraging infrastructure-as-code for consistency across multiple environments
- Automating our testing and deployment pipeline for CI/CD
Together, these elements make it dramatically easier for Vivun engineers to build and own their applications quickly, without compromising on security or reliability.
Our infrastructure: serverless containers
We leverage Kubernetes (specifically Amazon EKS) to orchestrate our container ecosystem at Vivun. In total we have three clusters, each in its own dedicated AWS account and VPC to provide the best possible isolation between environments. Our development environment is ever-changing, allowing developers to quickly deploy, test, collaborate, and get validation for the features and functionality they are working on.
This enables rapid development and visibility while keeping code quality high and setting a clear bar before promotion. We even have plans to dynamically create and tear down ephemeral environments for features as they are engineered, further increasing development velocity by giving each team its own short-lived sandbox. The staging environment provides a more rigid area where we can run more in-depth tests, conduct demos, and validate that a release is truly ready. And then, of course, there's our production environment, where we get to share our hard work with our customers.
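To give a rough sense of the per-environment isolation described above, the pattern looks something like this in Terraform. Everything here is a hypothetical sketch, not our actual configuration: the account profile, region, CIDR, module path, and names are all illustrative.

```hcl
# Hypothetical sketch: each environment lives in its own AWS account,
# selected via a provider alias, and gets its own VPC and cluster from
# a shared module. All names and values are illustrative.
provider "aws" {
  alias   = "dev"
  region  = "us-east-1"
  profile = "vivun-dev" # credentials for the dedicated dev account
}

module "eks_dev" {
  source    = "./modules/eks-cluster" # assumed shared cluster module
  providers = { aws = aws.dev }

  cluster_name = "eval-dev"
  vpc_cidr     = "10.10.0.0/16" # each environment gets an isolated VPC
}
```

Repeating the same module block with `staging` and `prod` provider aliases keeps all three environments structurally identical while still living in fully separate accounts.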
All of this runs on zero Vivun-managed servers, which keeps our operational burden low and allows us to leverage the security and operational maturity of AWS managed services to the fullest. You heard that right: no VM management, no operating system (OS) patching, no snapshotting, and no Secure Shell (SSH) keys or other forms of server access to manage. So many nights and weekends saved!
Thanks to AWS Fargate-backed EKS nodes, we no longer have to worry about any of the traditional headaches that come with server or VM management. This also allows us to put security at the forefront: we've locked down our control plane, assigned AWS access via IAM Roles for Service Accounts (IRSA) instead of individual user management or access keys, and created a strict separation of data, applications, and user traffic in a Zero Trust approach.
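For the curious, the Fargate side of this can be sketched in Terraform roughly as follows. The resource names, namespace, and variables are hypothetical, and the IAM role and trust-policy wiring are assumed to exist elsewhere:

```hcl
# Hypothetical sketch: a Fargate profile schedules pods in the "eval"
# namespace onto AWS-managed compute, so there are no EC2 nodes for us
# to patch or SSH into. Names are illustrative.
resource "aws_eks_fargate_profile" "apps" {
  cluster_name           = aws_eks_cluster.main.name        # assumed cluster resource
  fargate_profile_name   = "eval-apps"
  pod_execution_role_arn = aws_iam_role.fargate_pods.arn    # assumed role
  subnet_ids             = var.private_subnet_ids

  selector {
    namespace = "eval" # only pods in this namespace run on Fargate
  }
}
```

With IRSA, each workload's Kubernetes service account is annotated with an IAM role ARN (via the `eks.amazonaws.com/role-arn` annotation), so pods receive short-lived, scoped AWS credentials without any long-lived access keys.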
Using Infrastructure-as-Code (IaC) to efficiently manage multiple environments
Managing infrastructure is always complex, but luckily HashiCorp's Terraform removes much of that burden for us. Terraform allows us to write centralized infrastructure-as-code that is source-controlled, governed by code review and testing, and easily replicated across many different environments. This creates consistent, version-controlled infrastructure in all of our environments, and allows us to manage the AWS accounts separately while still keeping them in sync.
Within Kubernetes, we also use Terraform to deploy and manage Helm charts, capturing the state of many of our utility tools within IaC. Having Terraform providers for everything from cloud services to Helm chart management means that every aspect of our infrastructure can be peer-reviewed and have a deployment model written around it, just as any application code would.
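As one illustrative example of that pattern (the chart, version, and values below are placeholders, not our actual deployment), a utility tool managed through Terraform's helm provider looks something like:

```hcl
# Hypothetical sketch: a utility chart whose version and values go
# through the same code review and plan/apply flow as everything else.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # or auth derived from the EKS cluster
  }
}

resource "helm_release" "metrics_server" {
  name       = "metrics-server"
  repository = "https://kubernetes-sigs.github.io/metrics-server/"
  chart      = "metrics-server"
  version    = "3.12.1" # illustrative pinned version
  namespace  = "kube-system"

  set {
    name  = "replicas"
    value = "2"
  }
}
```

Because the release is just another Terraform resource, a chart upgrade shows up as a reviewable diff in a pull request rather than an ad hoc `helm upgrade` run from someone's laptop.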
Automating our testing and deployment pipeline
Finally, we’ve automated much of our testing and deployment pipeline with Harness. Harness allows us to streamline automated deployments of new service images to a lower environment where testing is automatically run to validate each code merge.
Our QA team can monitor the results of a battery of tests performed on each deployment, determine which failures prevent promotion, and even have tests that will trigger a rollback and kick out the bad code. You can’t have fellow developers breaking dev for everyone, especially when you’re on tight timelines.
Once things are ready to roll, all it takes is the push of a button from an authorized user to promote the same static image bundle up through our environments, allowing us to quickly and cleanly roll out fixes, updates, and new features to our clients.
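Put together, the flow looks roughly like the pseudoconfig below. To be clear, this is not actual Harness syntax, just the conceptual shape of the pipeline described above:

```
stages:
  - build_and_push_image      # one immutable image per code merge
  - deploy_to_dev             # automatic after merge
  - run_test_battery          # failures block promotion or trigger rollback
  - manual_approval           # the authorized "push of a button"
  - promote_image_to_staging  # same image, no rebuild
  - promote_image_to_prod
```

The key property is that the artifact never changes after it is built: the exact image that passed testing in the lower environments is the one that reaches production.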
Come Build with Us
We’ve got all these amazing things in our toolbox, but a tool is only as good as the person who wields it—and our team is top-notch! If this type of work sounds interesting to you, head over to our careers page and see if something is open! Nothing available? Hit us up anyways. We always love to talk about the cool innovation we’re bringing to PreSales and B2B technology!