Data Engineer

About the Product

Our mission is to enable every organization and person with the technology to positively impact the health of planet Earth.

Persefoni is creating an all-in-one platform that allows organizations to measure, analyze, and reduce their enterprise carbon footprint. Our goal is to provide our customers with unprecedented visibility and insight into the impact their organization has on the environment. Leveraging the latest breakthroughs in data science and software, our technology will empower teams and leaders to mobilize their organizations to continuously improve their greenhouse gas emissions metrics.

Our Core Values

  • Sustainability – We are committed to sustainable business practices across our entire operation and culture. We go beyond achieving balance. We are a net-positive contributor to the environment, our employees’ lives, and the global community.
  • Impact – We are focused on and passionate about tackling the biggest and hardest problems that will have the greatest impact. We create significant, not incremental, solutions.
  • Collaboration – We are always aligned in our goals and efforts to create the most impactful technologies possible. Constant cooperation across our company, customers, and partners is our standard mode of operating.
  • Equality – We value and respect people and organizations of all backgrounds. Ours is a culture of innovation, creativity, diversity of thought, and inclusion.

Job Description

We are in search of a Data Engineer to join our team.

Successful candidates should have at least 5 years of recent professional development experience in positions requiring the skills listed below, with an emphasis on API development with Python and ELT/data modeling with Databricks (Python/Spark), Snowflake, and dbt.

Our project entails implementing our pre-approved development targets and building a robust, reusable code framework to deliver a variety of new features across our product lines, following our preferred architecture design and best practices.

Our data-engineering stack is a combination of Go- and Python-based frameworks and AWS database, storage, and analytics services. All development work is managed via Jira sprints and Bitbucket Git-flow branch management.

Responsibilities

  • Collaborating with senior management and fellow developers to meet both technical and consumer requirements while maintaining regular communication of progress
  • Operating comfortably within an Agile team using Scrum methodologies
  • Analyzing, transforming, and providing data via microservices and APIs
  • Database architecture, including structural and relational design
  • Data Integration between both internal and external data sources
  • Creating containerized Docker microservices targeted for AWS hosting
  • Integrating AWS services where beneficial for project needs
  • Staying abreast of developments in data-related applications and programming languages

Skills

  • At least 5 years of experience analyzing requirements and designing new solutions for application and database components along with development and implementation experience
  • At least 2 years of experience with Python
  • 3 or more years of experience with RESTful or SOAP web services
  • At least 2 years of SQL database experience; AWS Aurora familiarity is a plus
  • At least 2 years of experience in designing scalable and robust data pipelines.
  • At least 2 years of experience with the end-to-end life cycle of Agile software development, including technical analysis of requirements, software development, troubleshooting, and implementing PR and QA feedback
  • Knowledge and understanding of cloud computing (ideally AWS), PaaS design principles, microservices, and containers
  • Ability to manage efforts that require integration of multiple technology systems, operations, or processes
  • Experience collaborating via sprint planning, daily stand-ups, ticket management, sprint demos, and sprint retrospectives
  • Proficiency with Git/Bitbucket, Git-flow branch management, Jira, and Docker are all pluses
  • Good verbal, written, and interpersonal communication skills; ability to communicate proactively across and within the team

Other Desired Qualifications

  • Experience with REST API development using Python (Flask)
  • Experience with enterprise system monitoring (Datadog preferred) and high availability architectures
  • Experience with machine learning implementation using Python

Our Hiring Process:

  • Step 1: Initial candidate screening call with Recruiter
  • Step 2: Interview with member(s) of TurnKey Team
  • Step 3: Tech + cultural interview with a Tech Lead and Head of Data Engineering
  • Step 4: Offer extended
  • Step 5: Background check and onboarding

Perks of becoming our team member:

  • Remote work and flexible PTO
  • Private health insurance
  • Sponsorship for conferences, continuing education, etc.
  • Personal laptop (MacBook)
  • Working closely with an international team of scientists, engineers, platform architects, programmers and professionals
  • Do something morally benevolent!

Apply for this position

Allowed file type(s): .pdf, .doc, .docx (maximum file size 32 MB)