Contractor Python Engineer (Data Platform Team)
- London, United Kingdom
- Contract
- Data Platform
About the role
As a Contract Data Platform Engineer, you’ll help shape and scale our data infrastructure, making analytics faster, more reliable, and cost-efficient. You’ll work with AWS, Snowflake, Python, and Terraform, building tooling, onboarding new data sources, and optimising pipelines.
You'll collaborate closely with teams across the business, ensuring our platform is secure, scalable, and easy to use.
Team mission
We aim to maximise business value by optimising the analytics pipeline, strengthening Lendable's competitive advantage through effective use of data.
This means designing our Data Platform so that analysts can extract insights from data quickly and efficiently, while ensuring cost control, compliance, and security.
What will you be doing?
This is a non-exhaustive list of the activities you'll take on in this role:
Develop new tooling for product teams to boost their efficiency.
Work with the wider team to maintain, evolve, and scale data infrastructure solutions on AWS and Snowflake.
Onboard new ingestion sources and keep them running smoothly and well monitored.
Ensure platform robustness through automated testing and monitoring of data pipelines, keeping pace with Lendable's expected growth.
Collaborate with stakeholders to translate business requirements into scalable technical solutions.
Optimise existing CI/CD pipelines for faster cycle times and increased reliability.
Implement security best practices for data management and infrastructure on cloud platforms.
Design and deploy infrastructure as code to manage cloud resources efficiently.
Help troubleshoot and resolve production issues to minimise downtime and improve user satisfaction.
Skills we are looking for
Must have
Proficiency in software development, particularly in Python or a similar language.
Solid engineering practices, including automated testing, deployment systems, and configuration as code.
Experience with cloud services such as AWS, GCP, or equivalent (preference for AWS - S3, IAM, SNS, SQS, Athena, Glue, Kinesis).
Familiarity with infrastructure as code, preferably Terraform.
Knowledge of columnar databases, such as Snowflake.
Experience in developing and optimising CI/CD pipelines, with a preference for GitHub Actions.
Excellent communication skills for effective collaboration with business analysts and stakeholders, ensuring technical solutions meet business needs.
Experience with data ingestion tools such as Fivetran.
Experience with data orchestration tools (Airflow, Prefect, etc.).
Nice to have
Expertise in data transformation, particularly with dbt.
Exposure to data visualisation tools.
Experience with data observability tools (Monte Carlo, Great Expectations, etc.).
Experience with data catalog tools (Amundsen, OpenMetadata, etc.).
Exposure to deploying applications with Kubernetes.