Technology & Operations
Big Data Architect (Hadoop, Spark, Kafka)
Imagine introducing the next generation of technology to a platform in real-time.
G-Research is a leading quantitative research and technology company. By using the latest scientific techniques, we produce world-beating predictive research and build advanced technology to analyse the world’s data.
We are developing the next generation of infrastructure that can effortlessly scale and flex. This capability allows us to adapt to the demanding needs of the business to process and analyse large and varied data sets at speed.
We are a fast-growing software organisation with large distributed Big Data systems. We're looking for a visionary Big Data Architect who can help us design and build Big Data solutions, both on-premises and in the cloud.
You will act as the link between various engineers, architects, operations specialists and data scientists by helping to define, prototype and oversee use cases from a practical perspective. You will work on projects to evaluate and implement new Big Data frameworks and tools.
You will also be involved with the full lifecycle of Big Data solutions, working closely with the Big Data Platform Engineering team who build and operate the live platforms. This will include:
- Requirements analysis
- Evaluation of new vendors/technologies and platform selection
- Technical architecture design
- Application design
- Development, testing and deployment of the proposed solution
You will also spend significant time working with vendors, the open source community, our Technology Innovation Group and the CTO.
Who are we looking for?
Ideally you’ll have:
- Deep knowledge of Linux
- Scripting/Coding experience (e.g. Bash, Python, Perl)
- Excellent experience with the Hadoop ecosystem (such as HDFS, YARN and/or Hive)
- Strong experience with streaming and stream processing frameworks (such as Spark, Storm, Flink, Kafka and/or Kinesis)
- Good knowledge of at least one of the following programming languages: Python, Scala, Go, Kotlin, Java
- Experience with NoSQL databases (such as HBase, Cassandra and/or MongoDB)
- An understanding of data management approaches, data design patterns and knowledge of UML modelling
- Knowledge of authentication and security protocols (such as Kerberos, SAML and/or OAuth)
- An understanding of log management solutions, ideally Elasticsearch and/or Solr
- Configuration and orchestration management experience (such as Ansible, Puppet and/or Chef)
- Experience with cloud and container technologies (such as Kubernetes, AWS, GCP, Azure and/or OpenStack)
Why should you apply?
- Highly competitive compensation plus annual discretionary bonus
- Informal dress code and excellent work/life balance
- Comprehensive healthcare and life assurance
- 25 days' holiday
- Monthly company events
- Central London office close to 5 stations and 6 tube lines