Job Description
- The Big Data Engineer will be responsible for setting up and maintaining the big data cluster environment (Hadoop, Spark, Storm, Kafka, ZooKeeper, Solr, etc.)
- Perform advanced analytics
- Develop scripts and wrap business/technical function APIs that integrate with big data technologies
- Develop business use cases utilizing the big data platform
- Build high-performance algorithms, prototypes, predictive models and proof of concepts
- Research opportunities for data acquisition and new uses for existing data
- Employ a variety of languages and tools (e.g. scripting languages, Java, Scala)
- Recommend ways to improve data reliability, efficiency and quality
- Participate in projects and customer on-site activities and tasks
Job Requirements
- Hadoop administration skills: deploying Hadoop clusters, adding and removing nodes, keeping track of jobs, monitoring critical parts of the cluster, configuring NameNode high availability, scheduling and configuring the cluster, and taking backups
- Nice to have: Cloudera Administrator or Developer certification
- Practical knowledge of Cloudera Hadoop Distribution components
- Strong understanding of Hadoop-based analytics, with knowledge of HDFS, MapReduce, Hive, Impala, HBase, Kafka and Flume
- Willingness to be mobilized for work in other cities or countries according to business needs