The Big Data Analytics team is part of the Insights and Data Global Practice and has seen strong growth and continued success across a variety of projects and sectors.
We continue to grow and are looking to add experienced Big Data Analytics SMEs to our talented team. We encourage a ‘leave your grade at the door’ mentality, so everyone feels comfortable contributing to our innovation. We also have a great sense of community: we encourage the sharing of ideas, sponsor meet-ups with your peers and run a large calendar of social events throughout the year.
If you are keen to work on truly cutting-edge programmes, learn new technologies and techniques, and join a team of like-minded big data architects, engineers and analysts, then click the link below and apply now.
The focus of your role:
We are looking to bring in a wide range of experience and expertise across grades. If you join us, you will be involved in the full life cycle of designing and delivering modern analytical data solutions, using a wide range of open-source and COTS products. Our projects are varied: sometimes you may be asked to help define a client’s data transformation roadmap; other times you may be rolling up your sleeves and creating an innovative solution to a complex problem.
What you’ll do:
- You will work within or lead an engineering team to:
  - design, develop or interface into new or existing data feeds, in order to manipulate and transform data into desired target architectures/applications
  - manage all aspects of day-to-day delivery in an Agile or DevOps environment
- You will work with clients to identify a data transformation roadmap
- Internally, you will help identify improved ways of working, mentor junior colleagues and share knowledge
What you’ll bring:
This will depend on your grade, but if you can demonstrate some or all of these characteristics then we’d really like to talk to you:
- Strong software engineering/development background (in Java, C# or COTS products)
- Data manipulation (XML processing, customised ETL solutions) and data merging experience
- Data transformation using ETL COTS tooling
- Experience of engineering solutions to cope with Big (volume) and Fast (real/near-real-time) requirements
- Experience developing, interfacing or designing data centric applications.
- Experience of middleware integration applications, either commercial (e.g. MuleSoft) or open source
- Experience of DevOps-type operations: use of source control (GitHub, Stash), build and automation tools (Puppet, Jenkins, etc.), continuous integration, and test-driven development (e.g. Cucumber, Lettuce)
- Experience of cloud infrastructure usage and set-up
Any experience of:
- Hadoop ecosystem (Cloudera, Hortonworks, IBM, AWS, Microsoft & Google)
- Hadoop administration, set-up, security and tuning
- Data ingestion
- Kafka, Flume, Spark
- SQL on Hadoop tools
- Hawq and Greenplum
- ETL tools like Pentaho, Informatica BDE, Talend
- Elasticsearch & Solr
- NoSQL and other databases like MongoDB, Cassandra, Neo4J