
Distributed Systems DevOps Engineer

G-Research is Europe’s leading quantitative finance research firm. We hire the brightest minds in the world to tackle some of the biggest questions in finance. We pair this expertise with machine learning, big data, and some of the most advanced technology available to predict movements in financial markets.

The role

We are a fast-growing, software-driven organisation with large-scale deployments of distributed systems that drive research and monitoring capabilities across the business. We are looking for DevOps Engineers specialising in Big Data systems who can help us design, engineer, and maintain our key distributed logging and data-streaming platforms.

You will come on board to drive the expansion of Big Data technologies as a subject-matter expert (SME) and grow their reach within G-Research. You’ll help shape our infrastructure and build high-quality systems, with a focus on scalability, resiliency, security, and automation.

We are looking for highly talented self-starters with a strong DevOps/Linux engineering background to join our expanding PaaS technology teams, acting as SMEs for our distributed streaming and logging platforms, with a key focus on Apache Kafka, the ELK stack, and Splunk.

The responsibilities of the role include:

  • Owning the design, engineering, monitoring, and expansion of the platforms to help deliver a fully-featured and secure platform on top of which the business can integrate any application, including trading-critical components
  • Engineering solutions to enable the platforms to run on public cloud, on-prem, and on container orchestration technology
  • Working closely with various Development and SRE teams to help define standards and best practices, and to aid migration and onboarding onto the platform
  • Investigating and staying up to date with relevant and alternative technologies in this space to ensure we provide best-of-breed products that meet the ever-changing and growing demands of the business
  • Managing, maintaining, troubleshooting, and performance-tuning the platforms

Who are we looking for?

The ideal candidate will have the following:

  • Experience with distributed log indexing and search (e.g. ELK, Splunk, Solr)
  • Experience with streaming/messaging systems (e.g. Apache Kafka, RabbitMQ, Redis)
  • Linux Engineering/DevOps background
  • Strong development experience with Java and Python
  • Strong systems engineering and integration experience
  • Good knowledge of orchestration and configuration management (e.g. Ansible, Terraform)
  • Experience in architecting, implementing and supporting applications running on Linux
  • Continuous integration and deployment – ideally Jenkins
  • Experience creating monitoring dashboards (Grafana, Prometheus, OpenTSDB)

The following would be advantageous but not necessary:

  • AWS/cloud services (e.g. EC2, S3, Route 53, CloudFormation)
  • Hadoop
  • Batch and streaming job frameworks (e.g. Spark, Storm, NiFi)
  • NoSQL databases (HBase, Cassandra, MongoDB)
  • Time series databases (e.g. InfluxDB, OpenTSDB, Prometheus)
  • Kerberos, SSL certificates
  • Database administration and querying
  • Container and cluster orchestration (e.g. Docker, Apache Mesos, Kubernetes)

Why should you apply?

  • Highly competitive compensation plus annual discretionary bonus
  • Informal dress code and excellent work/life balance
  • Comprehensive healthcare and life assurance
  • 25 days holiday
  • 9% company pension contributions
  • Cycle-to-work scheme
  • Subsidised gym membership
  • Monthly company events
  • Central London office close to 5 stations and 6 tube lines
