Salary range: Confidential | Contract type: Permanent
The analytics engineer will multiply the effectiveness of everyone in the Data and BI team by creating, sharing, and simplifying the data systems that workflows are built on, whilst contributing positively to team culture. Our vision is to be the smartest company in Africa, and you will play a crucial part in that.
Analytics engineering at M-KOPA is new and evolving, sitting at the intersection of Data Science, BI/Analytics, and Data Engineering. You’ll sit inside a small team with autonomy to improve the data team’s workflows. Analytics engineering is a relatively new field, so we welcome applicants ranging from data analysts who tend towards repeatability and automation, to data engineers interested in the business domain, to data scientists who’ve realized that clean data is more important than another 1,000 epochs, to software engineers who want the perfect entry into the data space.
Location: Kenya, UK, or remote
Ultimately, your responsibility is to increase the speed and efficiency of common data team workflows. Some examples of how you might achieve this:
- Improve the overall Data team’s workflow through knowledge sharing, proper documentation, and code review
- Deliver/review new automation frameworks within the team
- Work on efficient ingestion of new data into our data warehouse, using tools such as Python, Spark, Airflow, ADF, Databricks, and Azure Data Lake
- Work on efficient storage of our data in the data warehouse, identifying performance improvements from query tuning to table redesign
- Work on the careful design of schemas, table names, data models, and practices within the data warehouse, creating a well-curated data set
- Rewrite our data model using dbt or similar, and empower other analysts to use the frameworks, developing their skills through mentoring and good code review
- Identify re-usable elements of downstream analytics and move them into the repeatable data model
- Contribute to our internal Python and R libraries, driving best practice
- You are a structured thinker, able to introduce structure that simplifies workflows and enables teams
- You are curious about everything from new tools in the data world to the right business definition for a given metric
- Strong SQL skills and experience transforming data
- Experience with collaborative data and software development via git, and comfort on the command line
- Experience deploying code/models to production, ideally via automated deployments
- Experience with Data Warehouse Schema Design
- Experience with Python, pandas, and Airflow
- Experience with dbt (and Jinja) or a similar tool
- Experience with using automated deployment pipelines
- Experience with distributed computing tools such as Spark
- Experience with Microsoft Azure (U-SQL, Data Factory, Data Lake); other big data tools (Hadoop, Spark) and similar cloud providers are also a plus
- Familiarity with agile DataOps development processes, unit testing, source control, continuous integration, etc.
- Experience with data visualization tools (such as Power BI, Tableau, ggplot2, D3.js, Seaborn, Matplotlib, or Dash)
Job Requirements
Required education: Bachelor's degree
Required relevant work experience: 3 years
Required languages: English (Spoken: fluent | Written: fluent)
Please have a scan or photo of these documents ready when you start the application:
- Self-prepared CV file: the employer wants to see a CV that you have prepared yourself