Pulumi We Go. Pt 1

Down the Rabbit Hole.

In the first entry of the "Pulumi We Go" series, we looked at my initial impressions of using Pulumi to redeploy my AWS cloud lab and discussed some goals to improve my proficiency with Pulumi. In this article, we will walk through the deployment of the core network of this lab. As always, you can find the code snippets in the embed link above, or you can visit my GitHub page to review them yourself.

Defining the Network

As outlined in the previous article, we have established a non-default Virtual Private Cloud (VPC) containing two distinct subnets. We have also constructed a route table and linked it to both subnets through route table associations. The rationale behind this configuration is to provide a structured, isolated environment within our AWS cloud infrastructure. This setup gives us finer control and segmentation of network traffic, which is crucial for maintaining security and efficient data flow between the different sections of our cloud system.

In this VPC, each subnet serves a unique purpose, with one designated for front-end access and the other for back-end services. This separation ensures that the public-facing services are isolated from the core back-end processes, thereby reducing the potential attack surface. The route table plays a pivotal role by directing traffic between these subnets according to predefined rules, ensuring that the communication is seamless and secure. Furthermore, the association of the route table with both subnets simplifies the management of network routes. It allows for centralized changes and updates, which are automatically applied across the network, enhancing the consistency and reliability of network operations. This setup not only optimizes network performance but also aligns with best practices for network design in cloud environments.
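To make this concrete, here is a minimal sketch of that slice of the stack using Pulumi's Go SDK. The resource names and CIDR blocks are placeholders I've picked for illustration rather than the exact values from the embedded snippets, and details such as tags and availability zones are left out.

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/ec2"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Non-default VPC for the lab. Names and CIDR blocks are placeholders.
		vpc, err := ec2.NewVpc(ctx, "lab-vpc", &ec2.VpcArgs{
			CidrBlock: pulumi.String("10.0.0.0/16"),
		})
		if err != nil {
			return err
		}

		// One front-end (public) and one back-end (private) subnet.
		public, err := ec2.NewSubnet(ctx, "lab-public", &ec2.SubnetArgs{
			VpcId:     vpc.ID(),
			CidrBlock: pulumi.String("10.0.1.0/24"),
		})
		if err != nil {
			return err
		}
		private, err := ec2.NewSubnet(ctx, "lab-private", &ec2.SubnetArgs{
			VpcId:     vpc.ID(),
			CidrBlock: pulumi.String("10.0.2.0/24"),
		})
		if err != nil {
			return err
		}

		// A single route table, associated with both subnets.
		rt, err := ec2.NewRouteTable(ctx, "lab-rt", &ec2.RouteTableArgs{
			VpcId: vpc.ID(),
		})
		if err != nil {
			return err
		}
		for name, subnet := range map[string]*ec2.Subnet{"public": public, "private": private} {
			if _, err := ec2.NewRouteTableAssociation(ctx, "lab-rt-assoc-"+name, &ec2.RouteTableAssociationArgs{
				SubnetId:     subnet.ID(),
				RouteTableId: rt.ID(),
			}); err != nil {
				return err
			}
		}
		return nil
	})
}
```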

Defining Ingress and Egress Rules

Following that, we implemented a network access control list (ACL) and associated it with the public subnet. The ACL is configured to deny all incoming and outgoing traffic, giving us a default-deny baseline to build the rest of the network on. We also created an internet gateway and a route that sends all internet-bound egress traffic through it.

You might question the rationale behind explicitly setting deny rules when AWS already provides a degree of similar restriction by default; at first glance this looks redundant. My decision to define these deny rules explicitly comes down to a few factors. First, explicit rules give granular control over network traffic, which is essential for precise management and customization of traffic flows, particularly in environments with stringent security requirements. Second, compliance standards such as PCI DSS call for specific controls and audit capabilities that the default settings may not fully address; by setting our own rules, we keep the network aligned with those requirements. Lastly, explicit deny rules improve our monitoring: by capturing and logging the traffic that is explicitly denied, we gain insight into potential security threats and unusual network patterns, which is invaluable for troubleshooting and for responding quickly during a security incident.
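A rough sketch of this piece, continuing inside the same pulumi.Run callback as the previous example and reusing its vpc, public, and rt variables (again with placeholder names), might look like this:

```go
// Internet gateway plus a default route so egress traffic can reach the internet.
igw, err := ec2.NewInternetGateway(ctx, "lab-igw", &ec2.InternetGatewayArgs{
	VpcId: vpc.ID(),
})
if err != nil {
	return err
}
if _, err := ec2.NewRoute(ctx, "lab-default-route", &ec2.RouteArgs{
	RouteTableId:         rt.ID(),
	DestinationCidrBlock: pulumi.String("0.0.0.0/0"),
	GatewayId:            igw.ID(),
}); err != nil {
	return err
}

// Network ACL attached to the public subnet. A freshly created ACL allows nothing;
// the explicit deny rules are added in a loop in the next section.
nacl, err := ec2.NewNetworkAcl(ctx, "lab-public-nacl", &ec2.NetworkAclArgs{
	VpcId:     vpc.ID(),
	SubnetIds: pulumi.StringArray{public.ID()},
})
if err != nil {
	return err
}
```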

Logic Flows

If you take a moment to examine slides 2 and 3 from the provided code snippets, you'll notice one aspect I find particularly enjoyable about using Pulumi with Go: the intuitive, seamless way it handles loops and conditions during deployment. Specifically, in slide 3, we use Go's native for loop to deploy both the inbound and outbound ACL rules. Weaving looping constructs directly into the deployment logic shows off a power and flexibility in Go that I never really appreciated until I sat down and gave it a real shot.
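A stripped-down version of that pattern looks something like the following. It continues from the nacl created above, and the rule names and numbers are placeholders rather than a copy of slide 3:

```go
// Describe the baseline deny rules as plain Go data, then loop over them.
aclRules := []struct {
	name   string
	number int
	egress bool
}{
	{"deny-all-ingress", 100, false},
	{"deny-all-egress", 100, true},
}

for _, r := range aclRules {
	if _, err := ec2.NewNetworkAclRule(ctx, r.name, &ec2.NetworkAclRuleArgs{
		NetworkAclId: nacl.ID(),
		RuleNumber:   pulumi.Int(r.number),
		Egress:       pulumi.Bool(r.egress),
		Protocol:     pulumi.String("-1"), // all protocols
		RuleAction:   pulumi.String("deny"),
		CidrBlock:    pulumi.String("0.0.0.0/0"),
	}); err != nil {
		return err
	}
}
```

Adding another rule is just another entry in the slice; the loop body never changes.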

Moving on to slide 7, this advantage becomes even clearer. Here we can integrate conditional statements within a for loop, which simplifies the process of defining dynamic parameters. Managing logic and conditions this way in deployment code is efficient and keeps complexity down, and it contrasts sharply with my experience in Terraform, where implementing the same kind of logic often feels like an uphill battle: cumbersome, less intuitive, and altogether more forced.
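To illustrate the idea with a hypothetical security-group example (not a copy of slide 7), a conditional inside the loop can pick a different source CIDR per rule, so one loop body covers both internet-facing and internal-only access:

```go
// Hypothetical security group for the lab; the rule definitions drive the loop below.
sg, err := ec2.NewSecurityGroup(ctx, "lab-web-sg", &ec2.SecurityGroupArgs{
	VpcId: vpc.ID(),
})
if err != nil {
	return err
}

sgRules := []struct {
	name     string
	port     int
	internal bool
}{
	{"https-from-anywhere", 443, false},
	{"ssh-from-lab-only", 22, true},
}

for _, r := range sgRules {
	// Dynamic parameter: internal rules are scoped to the VPC CIDR,
	// everything else is opened to the internet.
	cidr := "0.0.0.0/0"
	if r.internal {
		cidr = "10.0.0.0/16" // placeholder lab CIDR from the earlier sketch
	}
	if _, err := ec2.NewSecurityGroupRule(ctx, r.name, &ec2.SecurityGroupRuleArgs{
		Type:            pulumi.String("ingress"),
		SecurityGroupId: sg.ID(),
		Protocol:        pulumi.String("tcp"),
		FromPort:        pulumi.Int(r.port),
		ToPort:          pulumi.Int(r.port),
		CidrBlocks:      pulumi.StringArray{pulumi.String(cidr)},
	}); err != nil {
		return err
	}
}
```

In Terraform, the equivalent usually ends up as for_each plus conditional expressions inside dynamic blocks, which is exactly the friction I'm describing.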

Based Network

So, that's about it. Our cloud lab network is now set up: a new VPC, subnets, routes, an internet gateway, Network Access Control Lists with custom rules, and security groups with custom rules. This forms the foundation of our initial network design. It's worth noting that, for me, networking is the most challenging part of the cloud to gain practical experience with, both because of the cost of running these components and because of the potential for mistakes. So if you plan to use any of the code from this article, please run it through a cost calculator first to get an idea of the expenses. In fact, I had already torn down all of this infrastructure before I started writing this article (ha).

Up Next

In the next article, we will look at deploying an EC2 instance from a custom-built Amazon Machine Image (AMI) that comes prepackaged with Nginx, and at building that image with an automation tool like HashiCorp Packer, or something similar.

Thanks for reading, catch you all in the next.
