Natalie Godec, Platform Engineer at Babylon Health

In the latest instalment of our #DevOpsQ&A series, we caught up with Natalie Godec, Platform Engineer at Babylon Health, to talk about how she began her career in the industry and the key skills needed to succeed in #DevOps.

Natalie also talks about the current trends in tech, as well as the benefits and challenges of using both AWS and GCP.

Third Republic (TR): Thanks for joining me today for our WomenInTechQA series. Just to kick off, could you tell us a bit about yourself and how you got into a career in tech?

Natalie Godec (NG): I started when I was about 14 years old, when my parents asked who I wanted to be. Since my mom was in tech, she was a systems engineer, I thought she might help me if I went into the same industry. So I decided on tech, and then she brought me to her workplace, to her office, piled a bunch of circuit boards and computer components in front of me, and said, ‘Build a computer’. She helped me, obviously, and I built it, and it didn’t work. It turned out it wasn’t my fault: the memory card wasn’t functional. But that’s how it started.

For the longest time, I wanted to be a network engineer, and I studied that for a long time. Then, very randomly, I got into DevOps. The story is: I needed an industry project for my master’s thesis, and the application deadlines were coming up, so I started applying to everything systems-related on the list of available projects, because most of what’s out there is for software developers, and systems work is a bit more difficult to come by. I applied and got an interview in Geneva, Switzerland, with a French-speaking bank, and my French was okay-ish. During the interview they were throwing around all of these words, like, ‘You’re gonna do Puppet and configuration management, and we do a bit of OpenStack’, and I was like, ‘Sure, exciting’. I had no idea what any of that meant, but then I spoke to my best friend, who had already worked in the industry, and he was like, ‘Oh my god, that’s so amazing, you should definitely do that’, and so I did. And I loved it. And I still do.

TR: As a platform engineer, could you talk about the journey that you’ve been on from the moment you took that job to where you are today?

NG: I did a year-and-a-half graduate programme in Switzerland, at a private bank, where I changed teams every four months, doing all sorts of different work related to systems, automation and configuration management. Then I moved to London and started working for Morgan Stanley, in their core infrastructure team, as a Systems Engineer. There it was also about on-prem infrastructure: managing, I think, 70,000 servers with a configuration management tool, building automation and building new environments. There was a lot of push towards building more secure environments for more sensitive data. We were doing that and trying to bring in a little bit of Docker, and a little bit of Kubernetes if possible, because it’s a huge traditional bank, so everything is quite established. You need to be very careful about what you bring in and how you actually introduce it into the existing infrastructure.

I was there for two years and then I thought I needed a different challenge. I wanted to do a little more open source, a little more with the tools that are standard in the industry, because that’s where I started in the graduate programme in Switzerland. We were using Logstash, Kibana, Grafana and Puppet, which back in 2015 was the industry standard for configuration management. I wanted to go back to that open platform: to go out, pick what the whole industry is working on, and try to implement it and build systems with it. So I joined Babylon Health as a platform engineer. I had never worked with Cloud in production. I had touched it a little, like we all do, a course here, a little lesson there, but I had never actually worked with it on a day-to-day basis. At Babylon Health, everything is in the Cloud. So that was a learning journey, and it’s very interesting; we are building very exciting things. Every day is a challenge.

TR: How did you find that switch, going straight into Cloud? Was it quite a steep learning curve?

NG: In terms of the learning journey, I’m still learning; there are always new things to learn, bits of infrastructure that I haven’t touched before. That’s always the case in our job. But at the end of the day, if you’re working with automation tools and configuration management, it’s just a different type of YAML file; pretty much, DevOps engineering is YAML engineering. There are similarities in terms of how you manage infrastructure, how you try to keep your platform consistent, have everything tested and have different environments. The principles of working, the methods of working, and how you try to design your system stay the same.

One thing that was very difficult and very different was the change of pace, because when you work for a huge organisation, and also when you work with on-prem infrastructure, things are slower than when you are in a startup with Cloud. I had gotten used to the pace of things during my two years at Morgan Stanley. Neither of those approaches is better or worse than the other; it’s just different. So there I am, joining the team at Babylon, where everyone is absolutely brilliant and everyone is working so fast. It took me, I think, two months to get used to the pace of Cloud, of a startup, of just ‘go and do it’.

TR: What would you say are some of the key skills that are needed for success in DevOps and Cloud infrastructure?

NG: You need to know how to Google. For me, being a DevOps engineer is a very creative job, because you’re faced with all sorts of problems every day, and it’s your job to find the solution to each one. Especially if you work with open-source tools, you have this whole world: the CNCF has this diagram, like a periodic table of open-source tools, and it’s massive, and it’s getting bigger and bigger every year. It’s quite an interesting place to look if you’re just starting out.

You have all sorts of tools at your disposal, and you can modify them to suit your needs. Or you can write your own scripts, because a lot of our job is automation. You do need to know how to write scripts in Bash or Python, regardless of what kind of infrastructure you work with.

I would say knowing how to Google and knowing how to script, to simply automate: you need to deploy a change, run a command across 15,000 servers, or delete 25 buckets from AWS. It’s all scriptable; it’s just a for loop. Nothing crazy, nothing like full-blown software development. But you do need to know how to script, and to problem-solve. A creative approach to problem solving, knowing where to look, how to look, and how to approach a problem from all different angles, is very important.
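To make the ‘it’s just a for loop’ point concrete, here is a minimal, hypothetical Python sketch (not from the interview) that deletes a handful of S3 buckets. It assumes boto3 is installed and AWS credentials are already configured; the bucket names are placeholders.

```python
# Hypothetical example: empty and delete a short list of S3 buckets in a for loop.
# Assumes boto3 is installed and AWS credentials are configured; bucket names are placeholders.
import boto3

buckets_to_remove = ["example-temp-bucket-1", "example-temp-bucket-2"]

s3 = boto3.resource("s3")

for name in buckets_to_remove:
    bucket = s3.Bucket(name)
    bucket.objects.all().delete()  # a bucket must be empty before it can be deleted
    bucket.delete()
    print(f"Deleted {name}")
```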

TR: You’ve worked extensively with on-prem, and now you specialise in Cloud infrastructure. Could you talk about the main differences that you’ve noticed?

NG: Technically, it’s often a very different skill set to work with on-prem versus Cloud. For instance, when I worked with on-prem, a lot of the skills that I had and practised every day were Linux, the command line, Unix principles, and debugging and administering Linux boxes. Now I don’t think I’ve debugged a Linux box in probably a year; maybe I restarted a Jenkins box once, but that’s it.

With Cloud, you have loads of services that are available in a managed way, where you simply order the service and it works. You don’t have to think about the deepest ends of the infrastructure. That’s the main difference from working with on-prem: if you want to deploy, say, Kubernetes, you have to know every single detail about how Kubernetes infrastructure works, how it’s installed, how different services interact with each other, and how the networking works, and even think about how you will deploy the network. You typically have a network team who deal with the switches, routers and all of the physical network infrastructure, and you have to sit down with them and talk about subnets, IP ranges, and which kind of router will sit on top of which rack in order to provide the functionality you want to have in Kubernetes.

Whereas when you’re in the Cloud, you can most likely just say, ‘Oh, there is EKS, it’s a managed Kubernetes service, I’ll just use that,’ and everything is set up for you. The same goes for GCP. For people who are into data analysis, Airflow is a very popular open-source tool for data pipelines, and it is, I think, quite painful to manage. I’ve never managed it myself on-prem, but I’ve heard the stories, and that’s why we use Cloud Composer on Google Cloud: it’s a managed service, so we just deploy it and everything is there.
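For readers who haven’t used Airflow, the sketch below is a minimal, hypothetical Airflow 2.x DAG (not from the interview) of the kind of pipeline a managed service like Cloud Composer runs for you; the DAG and task names are placeholders and the tasks just echo messages.

```python
# Hypothetical example: a tiny two-step Airflow DAG of the sort Cloud Composer would run.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_daily_pipeline",      # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo 'pull data'")
    transform = BashOperator(task_id="transform", bash_command="echo 'clean data'")

    extract >> transform                   # run extract first, then transform
```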

To watch the rest of the interview, and to find out the current trends in tech that Natalie is excited about, click here.

If you’d like to be involved in our next Q&A, get in touch with us today!
