
Auditing Hospice Care Documentation

Data Tapestry has a large footprint in the healthcare industry. With over 8 years of experience in hospice care, we’ve noticed significant gaps in the field’s analytic capabilities. From maintaining regulatory compliance to managing patient transitions with care, hospice providers face many delicate challenges that are difficult to manage without the proper tools.


One challenge in particular is efficient documentation auditing over the course of a patient's stay. The documentation needs to be complete and relevant both to the level of care prescribed and to the level of care delivered. This work is often left to case workers or quality control departments, creating a constant feedback loop of reviewing submitted documentation and re-sending the documents that need to be updated.


With our documentation solution, provider notes are continuously audited and checked for completeness, so there is no backlog of notes waiting to be updated by a case worker or quality control department. Automating this type of work frees your workforce to focus on more complex issues.


Currently, our system is EMR-agnostic and has been prototyped on product review data. We ingest and clean the text data and display basic statistics by author ID. The reviewCount column shows how many documents that author has written, and the similarity column measures how similar that author's documents are to one another.
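
As a rough illustration, the per-author summary can be computed along these lines. This is a minimal sketch assuming TF-IDF vectors and cosine similarity; the authorId, reviewCount, and similarity names follow the display described above, while the helper itself and the "text" column are illustrative rather than our exact production code.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def author_stats(df: pd.DataFrame) -> pd.DataFrame:
    # Summarize document count and average pairwise similarity per author.
    rows = []
    for author_id, group in df.groupby("authorId"):
        docs = group["text"].tolist()
        mean_sim = 0.0
        if len(docs) > 1:
            vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
            sims = cosine_similarity(vectors)
            n = len(docs)
            # Average only the off-diagonal entries; a document is always
            # 100% similar to itself.
            mean_sim = (sims.sum() - n) / (n * (n - 1))
        rows.append({"authorId": author_id,
                     "reviewCount": len(docs),
                     "similarity": round(mean_sim, 4)})
    return pd.DataFrame(rows)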


To visualize the similarity, you can hover over any point in the heatmap to see the similarity between the two corresponding documents for that author. Ideally, you’d want to see a mostly blue heatmap, indicating a low level of similarity between any two given documents.
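
A hover-enabled heatmap like the one described can be sketched with Plotly (an assumption on our part; the production interface may be built differently). Reversing the red-blue colorscale makes low similarity render blue and high similarity red:

import plotly.express as px
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_heatmap(docs):
    # Pairwise cosine similarity over TF-IDF vectors for one author's documents.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
    sims = cosine_similarity(vectors)
    # "RdBu_r" maps low similarity to blue, so a healthy author shows a
    # mostly blue heatmap; hovering over a cell shows the pairwise score.
    fig = px.imshow(sims,
                    color_continuous_scale="RdBu_r",
                    labels=dict(x="document", y="document", color="similarity"))
    return fig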


To analyze the text further, you can click on a point within the heatmap. For example, documents 73 and 30 are 27.59% similar. A look at the raw text of each, shown below, indicates that they share some words in common but are, overall, distinctly different.
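
A drill-down like the 73-versus-30 comparison could look like the following sketch, reusing the docs list and sims matrix from above; the helper name and output format are hypothetical.

def compare_pair(docs, sims, i, j):
    # Show the similarity score and raw text for two clicked documents.
    print(f"Documents {i} and {j} are {sims[i, j]:.2%} similar.\n")
    print(f"--- Document {i} ---\n{docs[i]}\n")
    print(f"--- Document {j} ---\n{docs[j]}")

# e.g. compare_pair(docs, sims, 73, 30)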


Much of this analysis can be customized to fit your organization’s needs: we can adjust how often documentation is reviewed, set thresholds for similarity scores, and tailor the interface. To find out more about our solution and how it can fit into your business, email us at business@datatapestry.ai or visit our website at www.datatapestry.ai.
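
For a sense of what that customization might look like, here is a purely hypothetical configuration sketch; the keys and defaults are illustrative, not actual product settings.

# Hypothetical per-organization settings; names and values are illustrative.
AUDIT_CONFIG = {
    "review_interval_days": 7,          # how often notes are re-audited
    "similarity_flag_threshold": 0.25,  # pairs scoring above this are flagged
    "notify_roles": ["case_worker", "quality_control"],  # who receives alerts
}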
