- Degree in Computer Science, Information Systems, or related field.
- Working experience with Spark for batch processing in big data projects.
- Working experience in stream processing with a framework such as Flink, Samza, or Spark Streaming.
- Experience handling millions of data events per day.
- Working experience in Scala or Java programming.
- Working experience with message queues such as Kafka or Pulsar.
- Working experience with unit testing frameworks.
- Working experience with NoSQL or graph databases.
- Exposure to DevOps practices and containerization is good to have.
- Excellent communication skills.
- Ability to work independently and as part of a team.
- Strong analytical skills.
Roles & Responsibilities
We are looking for a Data Engineer to implement the Data Architecture. The main role is to ensure that the telemetry data collected by the application (through both the UI layer and the API layer) is processed and stored in such a way that the Data Visualization layer can be built on top of it, providing the necessary analytical reports and dashboards.
- Understand and implement the Data Architecture.
- Facilitate the development process and operations.
- Design efficient data practices.
About the project
It is one of the key projects our honorable PM has sought to execute. As part of this program, we are building a learning and solutions platform for Government Officials / Executives. In the current plan, at least 25 million (2.5 crore) users are expected to use this platform. It is a very large platform consisting of 5 core pillars. The text after each hyphen is given only to help new readers relate to the broad functionality.
- Competency Hub - A unique framework through which all competency development (learning) can happen in a very structured and objective manner.
- Learning Hub - visualize as some form of Udemy or Coursera
- Discussion Hub - visualize as some form of Quora
- Network Hub - visualize as some form of LinkedIn
- Career Hub - visualize as some form of Naukri