Applications are invited from eligible candidates for the Publicis Sapient Data Engineer Hiring Challenge.
About the Challenge
At Publicis Sapient, we believe that good things happen when great minds come together. For 30 years, our secret to success has remained just that: by enabling our people to do the work that matters to them, we have built an enduring culture of creative problem-solving.
We are on a mission to transform the world, and you will be instrumental in shaping how we do it.
- Candidates with 4+ years of experience can participate in our hiring challenge.
- The candidate should:
  - have worked on Spark/Flink/Apache Beam
  - have worked with Python/Scala as a coding language
  - have worked on an MPP database (Redshift, Snowflake, BigQuery, etc.)
  - be able to write complex SQL queries (see the illustrative sketch after this list)
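As a rough illustration of the expected skill level, the hedged sketch below combines PySpark with a window-function query. The table name, columns and path (`orders`, `customer_id`, `/data/orders`) are hypothetical placeholders for illustration only, not part of the actual challenge.

```python
# Illustrative sketch only; table, column and path names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("windowed-aggregation-sketch").getOrCreate()

# Register a source table; in practice this could come from an MPP warehouse
# (Redshift, Snowflake, BigQuery) via the appropriate connector.
orders = spark.read.parquet("/data/orders")  # placeholder path
orders.createOrReplaceTempView("orders")

# A "complex SQL" style query: running total and rank per customer via window functions.
result = spark.sql("""
    SELECT customer_id,
           order_ts,
           amount,
           SUM(amount)  OVER (PARTITION BY customer_id ORDER BY order_ts)    AS running_total,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS amount_rank
    FROM orders
""")

result.show(truncate=False)
```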
- 10 MCQs
- 1 Programming Question
- 1 SQL Question
Compensation: Best in industry
As a Senior Associate L1 in Data Engineering, you will own the technical design and implement components of data engineering solutions.
You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and you will independently drive design discussions to ensure the health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala or Python, experience in data ingestion, integration, wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP or Azure cloud platforms is preferred.
Role & Responsibilities:
Your role is focused on the design, development and delivery of solutions involving:
- Data Ingestion, Integration and Transformation
- Data Storage and Computation Frameworks, Performance Optimizations
- Analytics & Visualizations
- Infrastructure & Cloud Computing
- Data Management Platforms
- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time (see the sketch after this list)
- Build functionality for data analytics, search and aggregation
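To make the ingestion responsibility concrete, here is a minimal, hedged sketch of batch plus real-time ingestion using Spark Structured Streaming. The Kafka broker, topic, schema and storage paths are assumed placeholders, and the Kafka source additionally assumes the spark-sql-kafka connector package is available on the classpath.

```python
# Hedged sketch of batch + real-time ingestion with Spark; broker, topic,
# paths and schema are hypothetical placeholders, not actual project values.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("value", DoubleType()),
])

# Batch ingestion: load historical files dropped by an upstream system.
batch_events = spark.read.schema(event_schema).json("/landing/events/")  # placeholder path

# Real-time ingestion: consume the same events from a Kafka topic
# (assumes the spark-sql-kafka connector is on the classpath).
stream_events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Both paths land in the same storage layout so downstream analytics see one dataset.
batch_events.write.mode("append").parquet("/warehouse/events/")
(
    stream_events.writeStream
    .format("parquet")
    .option("path", "/warehouse/events/")
    .option("checkpointLocation", "/warehouse/_checkpoints/events/")
    .start()
)
```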
Mandatory Experience and Competencies:
- Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies
- Minimum 1.5 years of experience in Big Data technologies
- Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
- Strong experience in at least one of the programming languages Java, Scala or Python; Java preferred
- Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
- Good knowledge of and hands-on experience with traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres)
- Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation etc
- Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
- Performance tuning and optimization of data pipelines
- CI/CD – Infra provisioning on the cloud, auto-build & deployment pipelines, code quality
- Working knowledge of data platform related services on at least one cloud platform, including IAM and data security
- Cloud data specialty and other related Big data technology certifications
- Strong written and verbal communication skills
- Articulation skills
- Good team player
- Self-starter who requires minimal oversight
- Ability to prioritize and manage multiple tasks
- Process orientation and the ability to define and set up processes
How to Register?
Interested applicants can apply for the challenge through this link.
Jan 30, 2022