Approximately two years ago we took a look at the technology and application stack we had developed and realised that, despite having built a platform that had served us extremely well for many years, it was time to embark on the next phase of our journey.
Like many companies starting out today, we wanted to embrace more of the open-source technologies now widely available and adopted. We already had a small Linux estate, but as a traditional Windows/.NET shop for the majority of our applications, porting them was never going to be easy. One of the key technologies that helped aid this journey was the release of .NET Core, which enabled us to realise our vision of porting successful applications to Linux without first performing expensive rewrites. In parallel, we began work on our strategic plan to break up our traditional application stacks, allowing us to deliver our next generation of scalable microservices deployed on distributed orchestration platforms.
To make our ambitious plans a reality, we needed to bring the highest calibre of talent into the teams, especially as one outcome of adopting an open-source technology strategy would be an enlarged Linux estate. With a small Linux team in place responsible for running the Market Data infrastructure, our Head of Technology, Andrew Mountford, recognised the need to build out a Core Linux Engineering function. Due to the rapid growth of the organisation and the Linux estate (6x growth in 12 months), the teams have had to grow by 200% to accommodate the pace of innovation, scale and engineering effort required to deliver G-Research's next generation of platforms. One of our key milestones was the hiring of Gary Conway as Group Head of Linux & Platforms:
‘I joined G-Research 18 months ago. My first focus was to build out the Core Linux Engineering function, with a big emphasis on automation, tooling and testing frameworks, as I could see the growth and appetite there was to make more use of Linux-based infrastructure. The existing team became the Linux Platforms function, with a focus on two key areas: the existing Linux Market Data platforms, and a newly formed Big Data engineering team responsible for building, running and scaling the newly created Hadoop, Kafka and ELK platforms. G-Research’s appetite to invest in the best technologies and people shows no signs of slowing down, and it’s great to work in an environment where I am surrounded by some of the smartest people in the industry. The challenges we have set ourselves provide an environment in which our teams can collaborate, innovate and adopt some of the bleeding-edge technologies on the market today. To meet these exciting and demanding challenges we intend to continue making huge investments, both in people and technology, over the next two to three years.’
Due to this growth, we have recently created a new team of engineers responsible for building our Kubernetes infrastructure. Currently a team of four, we’re actively looking for a new manager and additional engineers to join them. The key focus is to build out our new PaaS platforms, which will enable our Software Engineering teams to break our current application stack into a microservices architecture. This is already in its early phase of adoption and will only accelerate as part of our four-year roadmap to deliver distributed platforms capable of running across our hybrid cloud offering.
In 2017 we hired Alberto Romero as our Big Data Architect, previously a Hortonworks Big Data consultant for several large financial organisations. He has brought with him a wealth of experience and innovative ideas on how to build a world-class, large-scale, distributed Hadoop platform:
‘G-Research ticked all the boxes in terms of the kind of opportunity I wanted to pursue: it has a start-up feel to it, in that technical innovation drives business decisions, whilst working with extremely competent people with great success behind them and a mantra of never settling. At my previous company, I helped kick-start the Big Data initiatives at a couple of the largest financial organisations, and scaled clusters to several hundreds. It’s a great challenge to be doing it all over again for G-Research, where the compute requirements are outstandingly high and the impact to be made is notable. It has been an interesting journey so far, and there are milestones ahead of us that will make an even greater impact than what has been seen to date. Adopting cloud-like approaches will make us more efficient and agile, so I can only see an exciting time ahead of us.’ – Alberto Romero
Some of the key highlights of our journey so far include:
- Adopting and implementing Infrastructure as Code
- Implementing CI and testing infrastructure, allowing us to move towards a GitOps and immutable-infrastructure model
- Scaling our Hadoop estate exponentially
- Implementing many Big Data services, such as Spark, Hive, Druid and Privacera
- Deploying multiple Kubernetes clusters running production-critical applications
- Working with the Security and Cloud engineering teams to plan how best to take advantage of a hybrid cloud platform
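The GitOps and Infrastructure-as-Code items above share one core idea: desired state lives as declarative data under version control, and tooling reconciles the live estate against it. As a minimal, hypothetical sketch of that reconciliation loop (the names and version strings below are illustrative, not taken from our actual tooling):

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Compute the actions needed to move `actual` state towards `desired`.

    `desired` models what is declared in version control; `actual` models
    what is currently running. A GitOps controller applies this diff
    continuously, so drift is corrected rather than accumulated.
    """
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name} ({spec})")     # declared but missing
        elif actual[name] != spec:
            actions.append(f"update {name} -> {spec}")    # present but drifted
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")              # running but undeclared
    return actions

# Illustrative services and versions only:
desired = {"web": "v2", "cache": "v1"}
actual = {"web": "v1", "worker": "v1"}
print(reconcile(desired, actual))
# -> ['update web -> v2', 'create cache (v1)', 'delete worker']
```

Because the loop is driven entirely by the declared state, rolling back is just reverting a commit, which is what makes the model attractive for immutable infrastructure.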
If any of the above interests you, we have openings and are looking for talented, enthusiastic, DevOps-minded individuals to join our Big Data, Kubernetes and Linux engineering teams.