
Posts

Utility Corridor Management using Machine Learning

At Data Tapestry, our team's expertise spans a variety of specialties. We've applied NLP techniques, forecasting, and predictive analytics to many problems, but most recently our team worked with image data and the complexities it presents. We combined resources with unmanned imaging experts at Skytec, LLC to create a solution for overgrowth and vegetation management in utility corridors. Damage in these corridors caused by overgrowth can occur without warning, and tower damage and power outages can cost millions of dollars in repairs and regulatory fines. Detecting these encroachments is especially important because an electrical arc or flashover can occur when vegetation comes within 15 feet of power lines, damaging equipment or starting fires in nearby vegetation. Unfortunately, manual efforts to monitor overgrowth are labor-intensive, expensive, and inefficient. Our solution: imaging experts at Skytec provide aerial photos of utility corridors via unmanned…
Recent posts

Auditing Hospice Care Documentation

Data Tapestry has a large footprint in the healthcare industry. With over 8 years of experience in hospice care, we've noticed some large gaps in the field's analytic capabilities. From maintaining regulatory compliance to managing patient transitions with care, there are many delicate challenges in hospice care that are difficult to manage without the proper tools. One challenge in particular is efficient documentation auditing over the course of a patient's stay. The documentation needs to be complete and relevant to the level of care prescribed as well as the level of care executed. This work is often left to case workers or quality control departments, creating a constant feedback loop of reviewing submitted documentation and re-sending the documents that need to be updated. With our documentation solution, provider notes can be continuously audited and checked for completion, so there is no backlog of notes waiting to be updated by a case worker.

At Data Tapestry, we help organizations that are looking to understand their data and make improvements, whether to customer service, turnover, or one of the many other challenges facing organizations today. Our specialties include NLP analytics, designing data system architectures tailored for analytics, predictive analytics, and building applications that simplify working with data. For current and previous clients, we have built solutions that address turnover by automating the analysis of employee exit surveys, analyze employee efficiency, and manage utility corridors using machine learning. We have also integrated financial and contract management systems to improve allocation against the company's KPIs, and assigned parking for universities during sports seasons, to name a few. We are able to tackle a variety of software customizations as well as analyze data and provide solutions to…

Data Discovery and Understanding across an Organization

Much of what we do at Data Tapestry is helping our clients gain an understanding of their own data. If you work at a large organization, data access can be limited, which means you may not know what questions to ask of your data or what you can measure with it. Hillary Rivera walks us through how she partnered with multiple stakeholders to gain insight from a variety of data sources. Can you tell us about your background and current role at Data Tapestry? I started out my career thinking I would eventually go to medical school, but I quickly realized I wanted to work in analytics. I studied public health and worked in healthcare data management for a while, then went back to school to study business analytics at UT. Since then, I've enjoyed working in ecommerce, software, and technology. Now I'm back in healthcare, working as a data scientist at Data Tapestry. Tell me about how your project started with the client. I had a unique experience with my client because I had two…

Automating Visualizations and Implementing Standardized Data Collection Practices

Creating automated visualizations can be difficult when working with data that has not been standardized. In a large hospital system, standardization requires multi-level communication across many departments as well as strict adherence to those standards so that processes can be implemented. Alex Ratliff talks us through how he created a dashboard around ever-changing standardization issues. Tell me about your background and role at Data Tapestry. I'm currently a data scientist at Data Tapestry. My background is in math and analytics. During the first three years of my career, I worked at a company that contracted with the Department of Defense. While I was there, I built dashboards to monitor data flow and make sure the data was being processed correctly. There were different sites that made data transfers, and our job was to make sure each job had the proper amount of bandwidth. I mostly used R and SQL to manage that. Tell me about the dashboard project you worked on. When I started…

Transforming and Accessing Data through Custom Built Pipelines

One of the biggest hurdles in data analysis is just getting access to data in the first place. At Data Tapestry, we offer end-to-end analytics services that begin with data acquisition and continue through analytics and end-user products. Keith Shook walks us through how to maintain data security and integrity when dealing with a variety of situations. Tell us a little about your background and your role at Data Tapestry. Currently, I'm a senior data engineer, but I actually started off as an intern ingesting data into Postgres and SQL databases. I then shifted into visualization using D3, a JavaScript library, but we found that Tableau was much more efficient. Since then, I've gained a variety of experience using Scala, Hive, and AWS, and building clusters. Can you walk us through a project you've worked on? Data engineering is pretty straightforward as far as the process goes. You get the data, ingest it into the database, and then hand it off to the data scientist. You have to be flexible…
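As a rough sketch of that first ingestion step, the Scala snippet below loads a hypothetical CSV of readings into a Postgres staging table over JDBC. The file path, connection details, and table and column names are illustrative assumptions, not details from the project described above.

    // Minimal ingestion sketch: CSV -> Postgres staging table via JDBC.
    // All names, paths, and credentials below are placeholders.
    import java.sql.DriverManager
    import scala.io.Source

    object IngestReadings {
      def main(args: Array[String]): Unit = {
        val conn = DriverManager.getConnection(
          "jdbc:postgresql://localhost:5432/analytics", "etl_user", "etl_pass")
        val insert = conn.prepareStatement(
          "INSERT INTO staging_readings (site_id, reading_ts, kwh) VALUES (?, ?::timestamptz, ?)")
        try {
          // Skip the CSV header, then write each row into the staging table.
          Source.fromFile("/tmp/readings.csv").getLines().drop(1).foreach { line =>
            val Array(siteId, readingTs, kwh) = line.split(",").map(_.trim)
            insert.setString(1, siteId)
            insert.setString(2, readingTs)
            insert.setBigDecimal(3, new java.math.BigDecimal(kwh))
            insert.executeUpdate()
          }
        } finally conn.close()
      }
    }

Once the raw data sits in a staging table like this, it can be cleaned and handed off to the data science team.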

From Spreadsheets to Tableau: Creating Dynamic Data Visualizations

Data visualization can be a tough undertaking for many organizations. Choosing which metrics to show, and how to visualize them effectively, can be challenging when you are limited to tools like Excel. Additionally, how do you keep visualizations up to date with the ever-changing demands of your business? Alexa Tipton explains how she partnered with a client to achieve just that. Can you tell us about your background and current role at Data Tapestry? Currently, I'm a data scientist at Data Tapestry. Before that, I was a research assistant at UT Knoxville for the Center for Ultra-Wide-Area Resilient Electric Energy Transmission Networks, or CURENT. While there, I used a number of techniques, including text mining, machine learning, and NLP, to analyze tweets from Twitter. The goal was to understand the public's sentiment towards their energy providers around natural disasters but, more importantly, to help improve electric grid structures overall as part of the National Science Foundation project. Tell me about how you…