Data Engineer (Java/Python), Up to $2000
285 Viettel Complex Building - CMT8, Ward 12, District 10, Ho Chi Minh
Not specified
2020-02-17 -> 2020-02-18
- A minimum of 2 years' experience with Python (or Java) is required.
- Working knowledge of message queues, stream processing, and scalable data stores.
- Experience with data pipeline and workflow management, as well as big data tools: Azkaban, Luigi, Airflow, Hadoop, Spark, Kafka.
- Experience with cloud-based systems such as Google Cloud Platform / Amazon Web Services: BigQuery, Dataflow, Amazon EKS, EMR, Redshift.
- Strong math and SQL skills are a big plus.
- Tiki is intensely focused on growing its product catalog to offer customers a wider selection. This supports the goal behind everything we do: bringing more happiness and convenience to our customers.
- As members of the Supply Chain Optimization team, we are responsible for driving core projects that accelerate Tiki's effectiveness in fulfilling orders and managing inventory, speeding up delivery, and making the right investment decisions for the company budget. Admittedly, the growth in customer selection brings new challenges - the more selection, the harder it is to manage and optimize everything.
- Fortunately, our team is constantly iterating and standing together to solve problems. We have found many solutions to these challenges. We work with Big Data, Machine Learning, and even Deep Learning.
- We know the road, but we're just getting started.
- And as solutions arrive, the complexity of our data systems grows as well…
- We are looking for a Data Engineer to stand with us and take responsibility for building a platform with a strong architecture. And since we are just at the beginning of the road, you can let your imagination run free. We encourage everyone to dare to try new things and even make mistakes; after all, it is all part of life and learning.
- Responsibilities of the Data Engineer:
- Create and maintain optimal data pipeline architecture.
- Automate manual processes, optimize data delivery, re-design infrastructure for greater scalability.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Google Cloud big data technologies.
- Create data tools for the analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work closely with product owners, data analysts, and data scientists to strive for greater functionality in our data systems.
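The extract-transform-load work these responsibilities describe can be pictured with a minimal sketch. This is an illustration only, not Tiki's actual pipeline: sqlite3 stands in for the real source and analytics stores, and the table and column names (`raw_orders`, `clean_orders`, `order_id`, `amount`) are hypothetical.

```python
import sqlite3

def extract(conn):
    """Pull raw order rows from the source store."""
    return conn.execute("SELECT order_id, amount FROM raw_orders").fetchall()

def transform(rows):
    """Keep only positive-amount orders and convert cents to currency units."""
    return [(oid, amount / 100.0) for oid, amount in rows if amount > 0]

def load(conn, rows):
    """Write cleaned rows into the analytics table."""
    conn.executemany("INSERT INTO clean_orders VALUES (?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    # Illustrative in-memory setup; a real pipeline would read from and
    # write to separate, persistent data stores.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (order_id INTEGER, amount INTEGER)")
    conn.execute("CREATE TABLE clean_orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                     [(1, 1999), (2, -50), (3, 500)])
    load(conn, transform(extract(conn)))
```

In practice each of these three steps would become a task in a workflow manager such as Airflow or Luigi, so failures can be retried and runs scheduled independently.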
- Why you will want to work here:
- We are constantly iterating! There is no single best proposal for anything, no fastest API, no best machine learning model. We design, build, test, ship, optimize, and test again: a continuous stream of improvements and experiments.
- We have a data-driven mindset: every change must be tested to gain insight into its impact on key metrics. It's a long process, but over time we gradually learn and become confident in our approach.
- We love "best practices". Serving important features at high throughput always gives us an incentive to research and apply best practices. Any experiment or optimization is always welcome.
- We are both independent and open. We own our products. Technical problems are discussed internally, but for difficult ones we can ask other teams for help.