Linux DevOps Engineer (Hadoop, Cloud, Data Services) - £82k - Asset Management - Surrey
The successful candidate will join the Data Services team, which is looking for a self-motivated professional who is passionate about data technology.
You will have extensive experience in the design, engineering, implementation, tuning and automation of business-critical data platforms, both on-premises and ideally also in the cloud.
Your primary area of focus will be Big Data technology, primarily platforms based on the Hadoop ecosystem. Experience with reporting platforms and designing data pipelines is essential; additional experience with Kafka and NoSQL systems is desirable. You will be expected to be a key member of the DevOps data group, which is responsible for delivering and supporting all of our client's data platforms, and to play a role in informing future technology direction.
As an individual contributor, you will operate in a consultative capacity across the global Data Services team and with development customers, working to an agile delivery practice and producing project work and product features in sprint cycles. The role covers Hadoop and Kafka systems running on Linux, and data visualisation products running on Linux and Windows. You should be experienced in data solution design, engineering, implementation, tuning and automation with Hadoop on secured clusters, ideally using CDH 5.x.
This DevOps role is a unique opportunity that is both hands-on with data technology and strategic, informing future technology direction. You will provide third-line operations support to global team members when needed, and will be integrally involved in the delivery of larger-scale business and engineering projects. Ideally, candidates will also have experience engineering and administering reporting and streaming data platforms.
- Work with global team members in a consultative capacity to solve complicated data platform challenges, providing technical leadership and mentoring on root cause analysis and problem resolution. As the Emerging Data Services engineering lead, be responsible for the global engineering framework and grow the global team's engineering capability through mentoring, working closely with regional Data Services managers.
- Research, evaluate and recommend data technologies and products (existing and emerging, on-premises and cloud), as well as the tools and technologies needed to manage and support those products and pipelines.
- Research, evaluate and make available patches, new releases and functionality enhancements.
- Work closely with product owners on product engineering roadmaps.
- Multi-task effectively, handling day-to-day assignments with moderate direction and supervision.
- Communicate effectively and work well across cultural diversity within a geographically dispersed team.
Experience and Qualifications Required
- Experience of Linux operating system administration.
- Experience of engineering and administering Hadoop-based technology.
- Ideally, experience with cloud technology platforms and cloud migration projects.
- Full-lifecycle project experience with one or more of Kafka, Oracle Business Intelligence or Tableau.
- The ideal candidate will have experience with open-source technology such as Spark, Impala and Kudu, and will be proficient in designing and developing full-stack data infrastructure solutions.
- Proficiency with common developer toolsets and scripting, such as Java, shell, Python and REST.
- Proficiency with configuration management and infrastructure-as-code tools such as Ansible and Terraform.
- Familiarity with continuous integration and delivery (CI/CD) concepts and tools.
£82,000 basic salary (depending on experience) + bonus + benefits.