Hadoop Platform Engineer
G-Research is Europe’s leading quantitative finance research firm. We hire the brightest minds in the world to tackle some of the biggest questions in finance. We pair this expertise with machine learning, big data, and some of the most advanced technology available to predict movements in financial markets.
You will play a critical role in driving the expansion of Hadoop and associated technologies, and broadening their reach, within G-Research. You will use these technologies to build high-quality systems, with a focus on scalability, resiliency, security and automation.
Who are we looking for?
The ideal candidate will have experience with:
- Designing, running and troubleshooting Hadoop clusters
- Batch and streaming job frameworks (such as Spark, Storm)
- NoSQL databases (HBase, Cassandra, MongoDB)
- Middleware and messaging systems (e.g. Kafka, RabbitMQ, FTL, Ultra Messaging)
- Linux OS core principles, performance and tuning
- Scripting (e.g. Bash, Python, Perl)
- Automation via configuration management (such as Puppet or Chef) and orchestration tools (such as Ansible)
The following would be advantageous but not necessary:
- Service discovery (e.g. Zookeeper, Consul)
- Data collection and querying (such as Flume, Sqoop, Hive)
- Time series databases (such as InfluxDB, OpenTSDB, or Prometheus)
- Kerberos and SSL/TLS certificates
- Other scalable distributed systems, such as Splunk
- Solid knowledge of core network protocols (e.g. IP, UDP, TCP) and OS network stacks; multicast experience is a plus
- Database administration and querying (SQL)
- Container and cluster orchestration platforms (e.g. Docker, Apache Mesos, Kubernetes)
Why should you apply?
- Highly competitive compensation plus annual discretionary bonus
- Informal dress code and excellent work/life balance
- Comprehensive healthcare and life assurance
- 25 days' holiday
- 9% company pension contributions
- Cycle-to-work scheme
- Subsidised gym membership
- Monthly company events
- Central London office close to 5 stations and 6 tube lines