At MediSpend, we are on a mission to transform and simplify how the life science industry complies with global healthcare regulations. MediSpend's Global Compliance Solutions are recognized market leaders, fully integrated within a first-mover, born-in-the-cloud technology platform. We're gearing up for the next phase of growth and looking to onboard hungry, humble, and smart people who thrive on solving business problems with innovative technology. Read on to see what's different about this opportunity and begin to visualize how you will advance your career by joining our team.
Check this out:
- Get to know the regulatory compliance and healthcare entity data domain
- Work with “big data” architects to create analytic data structures
- Use modern data wrangling tools and techniques to inspect and transform data
- Participate in the renaissance of the functional programming paradigm within the industry’s hottest data transformation/analytics framework
- Learn and practice the tricks to working with data at scale
A day in the life:
You are a Data Engineer at MediSpend. You practice both ETL and ELT:
- You create data transformations that standardize three aggregate spend transaction files into a common format.
- You build innovative algorithms to detect duplicate healthcare providers.
- You figure out that a particular file won't load properly because it has missing delimiters.
- You build custom crosswalks that transform client-specific data into standardized formats.
- You build routines that create denormalized data structures to speed up analytic queries.
- You help build operational metrics that report on data quality and volumes.
- You help the product team mine for client data inconsistencies.
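To make the duplicate-provider problem concrete, here is a minimal sketch of one common approach: fuzzy matching over normalized provider names. The helper names, suffix list, and threshold are illustrative assumptions, not MediSpend's actual algorithm.

```python
from difflib import SequenceMatcher

# Credential suffixes and titles that cause cosmetic mismatches (hypothetical list).
NOISE_TOKENS = {"md", "do", "dr"}

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and drop noise tokens like 'MD' or 'Dr.'."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    tokens = [t for t in cleaned.split() if t not in NOISE_TOKENS]
    return " ".join(tokens)

def likely_duplicates(providers, threshold=0.85):
    """Pairwise-compare normalized names; return pairs whose similarity meets the threshold."""
    pairs = []
    for i in range(len(providers)):
        for j in range(i + 1, len(providers)):
            a, b = normalize(providers[i]), normalize(providers[j])
            score = SequenceMatcher(None, a, b).ratio()
            if score >= threshold:
                pairs.append((providers[i], providers[j], round(score, 2)))
    return pairs
```

In practice the pairwise loop would be replaced by blocking (e.g., comparing only within a ZIP code or NPI prefix) so the comparison count stays manageable at scale.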
What you bring to the table:
You're a professional with a great blend of practical experience, education, and achievement. You're effective at getting your points across in both written and verbal communication. You are passionate about working with data. Exposure to the healthcare data domain is a plus, but not required.
Significant experience with ETL/ELT tools and platforms is expected. We're a Java shop, so CloverETL, Pentaho, or Talend experience helps. If you've used modern data wrangling tools like Paxata, Tamr, or Trifacta, even better. Spark experience is a plus. You should be proficient in at least one programming language such as Java, Scala, or Python. SQL skills are a given. Knowledge of data storage systems, including traditional RDBMSs (Oracle, PostgreSQL, MariaDB), analytic data stores (Vertica, Greenplum, Redshift, Presto), and object stores (AWS S3, OpenStack Swift), is also helpful.
You’ve been in the trenches delivering commercial applications requiring data accuracy. You have contributed to the development of repeatable, modern data processing pipelines.
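A repeatable pipeline step like the client crosswalks described above can be sketched as a simple mapping from client-specific codes to a standard vocabulary. The field names, codes, and target values here are hypothetical examples, not an actual MediSpend schema.

```python
# A crosswalk maps client-specific field values onto a standardized vocabulary.
# These codes and standard values are made up for illustration.
SPEND_TYPE_CROSSWALK = {
    "MEAL": "food_and_beverage",
    "HONORARIA": "honoraria",
    "TRVL": "travel_and_lodging",
}

def standardize_record(record: dict, crosswalk: dict) -> dict:
    """Return a copy of the record with spend_type mapped to the standard vocabulary.

    Unknown codes are flagged explicitly rather than passed through silently,
    so data-quality metrics can count them downstream.
    """
    out = dict(record)
    code = record.get("spend_type", "").strip().upper()
    out["spend_type"] = crosswalk.get(code, f"UNMAPPED:{code}")
    return out
```

Flagging unmapped codes instead of dropping or guessing keeps the transformation repeatable and auditable, which matters when data accuracy is a commercial requirement.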
Growth story, startup feel, life science domain, passionate technologists, and born in the cloud