We are actively recruiting passionate Big Data Developers.
Responsibilities
influence the architecture of data transformation pipelines and the choice of tools
design and develop end-to-end ETL processes and integrations with business processes
create and maintain the metadata catalogue
have the chance to develop data science and machine learning skills
work with application developers, DevOps engineers, business stakeholders, and data scientists in an agile setup
develop solutions for Big Data use cases and deploy them on leading cloud platforms (AWS or Azure)
Skills and experience
min. 2 years of experience with at least one relational database (preferably SQL Server), including excellent SQL skills, and one non-relational database (preferably the Hortonworks Hadoop distribution)
min. 3 years of experience with ETL/ELT development in at least two technologies (preferably including Scala or Python)
experience with system integration, including both REST/JSON and XML-based integration
knowledge of CI/CD tools (e.g. Jenkins, Bamboo, Gerrit) and hands-on practice with at least one configuration management tool (Puppet, Chef, Ansible, etc.)
min. 2 years in a development role, including some shell scripting and/or Java/Scala/Python development
ability to write clean, modular, reusable code
ability to present technical concepts to both technical and non-technical stakeholders
awareness of TDD, solid experience testing your own code (including some test automation), and a good understanding of integration testing
some hands-on experience working in an agile setup, e.g. Scrum or SAFe