
Transforming and Accessing Data through Custom Built Pipelines

One of the biggest hurdles in data analysis is just getting access to data in the first place. At Data Tapestry, we offer end-to-end analytics services, from data acquisition through analytics to end-user products. Keith Shook walks us through how he maintains data security and integrity across a wide variety of data sources and situations.


Tell us a little about your background and your role at Data Tapestry.

Currently, I’m a senior data engineer, but I actually started off as an intern ingesting data into Postgres and other SQL databases. I then shifted into visualization using D3, a JavaScript library, but we found that Tableau was much more efficient. Since then, I’ve gained a variety of experience using Scala, Hive, and AWS, and building clusters.


Can you walk us through a project you’ve worked on?

Data engineering is pretty straightforward as far as the process goes. You get the data, ingest it into the database, and then hand it off to the data scientist. You have to be flexible with how you approach the process, because you can get data in a variety of different formats. Sometimes you know what the data looks like or what it should look like. Other times there’s a lot of cleaning involved.


During one project, I actually got the data from a physical flash drive. The company has a fleet of trucks that install electric dog fences. There was some sensitive geo-location data involving where the drivers stopped and even where they lived. Their data was being housed in Microsoft SQL Server, which is not the best structure. So, I exported all of the data to CSVs, imported it into Postgres, and then gave access to it.
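To give a rough idea of what that kind of CSV-to-Postgres load can look like, here is a minimal Python sketch; the table, column, and file names are hypothetical stand-ins rather than the client's actual schema.

```python
# Minimal sketch: load an exported CSV into Postgres with psycopg2.
# "truck_stops.csv" and the truck_stops_raw table are hypothetical examples.
import psycopg2

conn = psycopg2.connect(dbname="analytics", user="etl", host="localhost")
with conn, conn.cursor() as cur:
    # Create a staging table that mirrors the CSV columns.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS truck_stops_raw (
            truck_id   TEXT,
            stop_time  TIMESTAMP,
            latitude   DOUBLE PRECISION,
            longitude  DOUBLE PRECISION
        )
    """)
    # Stream the CSV straight into Postgres with COPY.
    with open("truck_stops.csv") as f:
        cur.copy_expert(
            "COPY truck_stops_raw FROM STDIN WITH (FORMAT csv, HEADER true)",
            f,
        )
```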


What steps do you take to protect sensitive information like that?

Data security is very important to me and my client. We follow best practices to ensure the data is secure at all times. As an additional measure, I treat all data like personally identifiable information (PII) so that there are no mishaps.
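Keith doesn't detail the specific controls here, but one common measure consistent with treating everything as PII is to pseudonymize or drop direct identifiers before data moves downstream. The column names and salt handling below are illustrative only, not the client's actual setup.

```python
# Illustrative sketch: pseudonymize an identifier column and drop fields the
# analysis doesn't need before the data leaves staging. Hypothetical columns.
import hashlib

import pandas as pd

SALT = "rotate-me-per-project"  # in practice, pulled from a secrets manager


def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, irreversible hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()


df = pd.read_csv("truck_stops.csv")
df["driver_id"] = df["driver_id"].astype(str).map(pseudonymize)
df = df.drop(columns=["driver_home_address"])  # keep only what analysis needs
df.to_csv("truck_stops_clean.csv", index=False)
```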


How do you interface with the client?

It depends on the project. Sometimes I serve in a small capacity, or I’ll be an embedded part of the team. Other times, I may be leading and consulting on the project.


What types of software and technology do you use for data ingestion?

There’s not really one particular toolset that fits every job; I write a custom Python script for each. For many projects, I’d visualize the data in Tableau, which let me see what was actually in the data. Sometimes I had to create calculated fields and join different tables together in order to create a dataset that made sense.
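As a rough illustration of that kind of joining and calculated-field work, the sketch below combines two hypothetical tables from Postgres, adds a derived column, and writes the result back so Tableau can connect to a single table; the table and column names are invented for the example.

```python
# Sketch of joining tables and adding a calculated field with pandas.
# The installs/trucks tables and their columns are hypothetical examples.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://etl@localhost/analytics")

installs = pd.read_sql("SELECT * FROM installs", engine)
trucks = pd.read_sql("SELECT * FROM trucks", engine)

# Join the two tables on their shared key.
merged = installs.merge(trucks, on="truck_id", how="left")

# Calculated field: how long each install took, in hours.
merged["install_hours"] = (
    (merged["completed_at"] - merged["started_at"]).dt.total_seconds() / 3600
)

# Persist the combined dataset so Tableau has one sensible table to point at.
merged.to_sql("installs_enriched", engine, if_exists="replace", index=False)
```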


What’s been the most interesting engineering challenge you’ve come across?

I’ve had to work with HL7 data. It’s a stream of data that hospitals create by recording various message types. The message types can be things like, “there was a discharge on June 8th at 12:00 pm.” They are highly specialized data streams that require custom-built pipelines to deal with the weird formats.
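HL7 v2 messages are pipe-delimited segments, so even a standard-library script can pull out the basics; the hard part is that every hospital's feed varies in segments and field usage. The message below is a made-up ADT^A03 (discharge) example, not real hospital data.

```python
# Sketch: pull basic fields out of an HL7 v2 message with the standard library.
# The sample message is fabricated; field positions follow the common layout,
# but real feeds differ, which is why custom-built pipelines are needed.
raw = (
    "MSH|^~\\&|HIS|HOSPITAL|DW|DATATAPESTRY|20230608120000||ADT^A03|MSG00001|P|2.3\r"
    "PID|1||123456^^^HOSP^MR||DOE^JOHN\r"
)

# Each segment is one line (separated by carriage returns); fields are pipes.
segments = [seg.split("|") for seg in raw.strip().split("\r")]
by_type = {seg[0]: seg for seg in segments}

msg_type = by_type["MSH"][8]                  # "ADT^A03" -> a discharge event
event_time = by_type["MSH"][6]                # timestamp, YYYYMMDDHHMMSS
patient_id = by_type["PID"][3].split("^")[0]  # medical record number

print(msg_type, event_time, patient_id)       # ADT^A03 20230608120000 123456
```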

