Job Description
- Define the architecture and scope of various Big Data solutions and deliver them.
- Support other teams by providing guidance on data modeling, data usage, and processing, and on how they can best leverage the platform.
- Build scalable data pipelines to ingest data from a variety of data sources, identify critical data elements, and define data quality rules.
- Leverage knowledge of the Spark/Hadoop ecosystem to design and develop capabilities that deliver innovative, improved data solutions.
- Provide insights on areas of improvement, including data governance, best practices, and large-scale processing.
- Support bug fixing and performance analysis across the data pipeline.
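The responsibilities above include defining data quality rules over ingested data. As a purely illustrative sketch (in Python, one of the languages the posting names; all names are hypothetical and not part of the role description), such rules can be modeled as named predicates applied to each record:

```python
# Illustrative sketch only: modeling data quality rules as named predicates.
# Record fields and rule names are hypothetical examples.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Record:
    id: str
    value: float

# Each rule pairs a human-readable name with a predicate over a record.
RULES: List[Tuple[str, Callable[[Record], bool]]] = [
    ("non-empty id", lambda r: bool(r.id)),
    ("value in range", lambda r: 0.0 <= r.value <= 100.0),
]

def violations(record: Record) -> List[str]:
    """Return the names of all rules the record fails."""
    return [name for name, check in RULES if not check(record)]
```

In a real pipeline the same predicates would typically be evaluated at scale (e.g., as filters in a Spark job) rather than record-by-record in plain Python.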
Job Requirements
- 3+ years of experience as a software engineer, with strong skills in at least one programming language (preferably Scala, Java, or Python) mandatory.
- 1+ years of experience with Spark on Hadoop, EMR, etc.
- Experience with real-time data processing using Kafka, Spark Streaming, or similar technologies.
- Experience with NoSQL databases (Cassandra, Elasticsearch, etc.).
- Experience with distributed systems and with designing/implementing for reliability, availability, scalability, and performance.
- Experience with IoT enablement applications and platforms (e.g., ThingsBoard, ThingWorx) is a huge plus.
- Experience with IoT wireless communication technologies (LoRa, Sigfox, NB-IoT) is a plus.
- A creative and innovative approach to problem-solving.
- Nice to Have:
- Familiarity with containerization platforms such as Docker and Kubernetes.
- Experience working with Hive, Presto, or other query engines.