Apply Infrastructure Engineering

Big Data Platform Engineer

Location : Dallas, TX

G-Research offers remote working within the State of Texas, bringing people together in the office for a number of days every month.

G-Research is Europe’s leading quantitative finance research firm. We hire the brightest minds in the world to tackle some of the biggest questions in finance. We pair this expertise with machine learning, big data, and some of the most advanced technology available to predict movements in financial markets.

This role is based in G-Research’s new office in Dallas. Opened in 2022, the Dallas office is a key infrastructure hub where we work on the latest cloud technologies in a cutting-edge environment.

The role

Our business focuses on forecasting financial markets, and we use an ever-growing amount of data and processing power to achieve this.

The Big Data Platform team builds the platforms - primarily Spark- and Hadoop-based - that are key to enabling business-critical functions. It is responsible for cutting-edge, petabyte-scale clusters which underpin diverse use-cases such as quantitative research, risk analysis and cyber security.

The team works closely with a variety of other teams – including Quantitative Research, IaaS Engineering, PaaS, and various development and security teams. Maintaining those close relationships is vital to understanding their use-cases and challenges, and to helping them get the most out of the Big Data Platform.

At such a scale, automation is key, so this role has a significant focus on configuration management, orchestration, Infrastructure as Code and CI/CD.

In addition, G-Research is embarking on a transformational journey in how it delivers infrastructure using open-source technologies. We are investing heavily in a hybrid cloud platform on which the next generation of applications and distributed platforms will be built to deliver development efficiencies. This provides a unique opportunity to reinvent how we operate our big data ecosystem, and to continue modernising and improving our tooling and practices.

Key responsibilities of the role include:

  • Helping shape and engineer the Big Data Platform and ensuring it is scalable, stable, and performant, as well as easy to use and maintain
  • Providing metrics, documentation, and self-service infrastructure to help our users work at pace and get the most out of the platform
  • Defining automation standards, frameworks and reporting
  • Managing a complex ecosystem of distributed technologies which work together
  • Enabling the company to grow its Big Data capabilities using Infrastructure-as-Code principles
  • Building infrastructure automation
  • Developing Python libraries
  • Developing CI/CD pipelines
  • Using advanced troubleshooting skills to diagnose and fix problems

Who are we looking for?

The successful candidate will be an enthusiastic Platform Engineer who is able to build an automated, scalable, reliable and high-performing Big Data Platform. They will work well as part of a team, but also be able to propose and run their own projects and improvement initiatives.

The team has a mixed set of skills that complement each other. Typically, team members will have strong abilities in one of infrastructure automation, Big Data technologies or software development, and be keen to expand their expertise in other areas.

The ideal candidate will have skills and experience in at least one of the following:

  • Automation tooling (exposure to a majority of the following, or the ability to quickly pick up new tooling):
      • Ansible
      • Kubernetes
      • Jenkins (CI/CD)
      • Linux OS core principles, performance and tuning
      • Cloud technologies, e.g. Terraform, AWS, OpenStack
  • Big Data technologies (exposure to a number of the following components, including building, tuning and troubleshooting clusters):
      • Hadoop ecosystem, e.g. HDFS, YARN, ZooKeeper, Hive
      • Batch and streaming job frameworks, e.g. Spark, Storm, Flink
      • NoSQL databases, e.g. HBase
      • Security components, e.g. Kerberos, SSL certificates
      • Cloud technologies, e.g. AWS EMR, CDP Public Cloud
  • Coding:
      • Experience with scripting or programming languages; Python is preferred, but capability in any high-level language is acceptable
      • Understanding of unit testing

Why should you apply?

  • Market-leading compensation plus annual discretionary bonus
  • Informal dress code and excellent work/life balance
  • Paid time off, including sick days, military leave, and family and medical leave
  • Summer working hours, equivalent to 4 additional days of paid leave
  • Generous 401(k) plan
  • 12 weeks’ fully paid parental leave
  • Medical and Prescription, Dental, and Vision insurance
  • Life and Accidental Death & Dismemberment (AD&D) insurance
  • Employee Assistance and Wellness programs

G-Research is committed to cultivating and preserving an inclusive work environment. We are an ideas-driven business and we place great value on diversity of experience and opinions.

We want to ensure that applicants receive a recruitment experience that enables them to perform at their best. If you have a disability or special need that requires accommodation please let us know in the relevant section.

