Welcome to Data IQ®
TeleTracking's patient flow solutions produce a tremendous amount of data. Knowing what data to track, measure, and analyze can seem like an impossible challenge. Our team of experts has been driving change and improving patient flow for nearly three decades. That collective knowledge and expertise is now infused into Data IQ®, our powerful next-generation, cloud-hosted analytics solution.
Why Data IQ®
Reduce Decision Making Silos with Enterprise-Wide Visibility
With visibility across the enterprise, staff can better understand the full picture of operations, identify workflow silos that may be impacting other departments, and quickly determine if issues are systemic or site-specific.
Identify Bottlenecks Faster and with Greater Precision
Operational data is presented in an easy-to-consume format with the ability to drill from the enterprise down to the individual patient or employee for deeper analysis.
Use Real-Time, Shared Situational Awareness to Address Emergent Concerns
With Data IQ®, staff have real-time visibility and a shared understanding of changing conditions, so they can work together to improve operations in the moment, remedy acute bottlenecks before they become system-wide concerns, and act cohesively in high-pressure situations.
Leverage TeleTracking’s 30 Years of Patient Flow Expertise and Improve Operational Performance
TeleTracking’s best practices and three decades of expertise have been translated into a best-in-class reporting package. This reporting package offers a blueprint for operational improvement, helps your teams track the metrics that matter most, and can be customized based on your organization’s needs.
Scaling to Size
As organizations expand with the addition of new facilities, Data IQ® scales accordingly. Cloud hosting minimizes on-site maintenance costs and reduces the number of on-site IT resources needed to support the system. And, because the solution is hosted in the Cloud, access to new features and updates is expedited.
Key Features
Subscriptions
Subscriptions deliver snapshots of dashboards and/or views via email. Project leads, supervisors, and analysts can subscribe other Data IQ® users to the dashboards or views that are meaningful to the work they do. Subscriptions are delivered on a recurring schedule and can be removed at any time.
Data-driven Alerts
Unlike subscriptions, which are delivered on a scheduled basis, data-driven alerts are delivered when key thresholds are reached. The data and the threshold value that trigger the alert are specified when the alert is created.
Shared Comments
Viewers, editors, and publishers can share observations and collaborate on dashboards and/or views through comments. Comments are specific to the dashboard and remain until they are removed by the author.
Ad-hoc Reporting
Publishers and editors will have direct access to a web-based report publishing tool to create their own content. These users will be able to create flat table reports, interactive reports, and dashboards while connected to TeleTracking published data sources.
Enterprise Dashboards
Data IQ® provides a real-time view of operations across a health system — enabling teams to make critical patient care decisions in the moment. Enterprise dashboards are intended to display on large, wall-mounted monitors in your Command Center or other key visible areas. To facilitate in-the-moment decision making, the dashboards are set to look for and display any new data every 60 seconds. TeleTracking's industry-leading best practices are embedded into these reports as goals, allowing users to quickly identify areas of opportunity. See Command Center Dashboards.
Desktop Reports
Data IQ® desktop reports provide historical reporting, drill-down views, enterprise-wide visibility, and full customization. These reports are built for interaction, so users can explore operationally valuable relationships in the data and answer difficult questions with a few clicks. Each report highlights different components of healthcare operations and provides insight at each level. TeleTracking's industry-leading best practices are embedded into these reports as goals, allowing users to quickly identify areas of opportunity. See Interactive Dashboards.
Sign In
Access to Data IQ® is via the Analytics menu in Operations IQ® Platform. Use the following steps to sign in.
Go to the Operations IQ® Platform using the URL provided by your system administrator to open the Sign In page.
Consider creating a shortcut to the Operations IQ® Platform on your desktop to make signing in faster.
Enter the user name and password you use within your health system, and then click Sign in. The TeleTracking IQ® platform Home page opens.
Select the Analytics tab, then select Data IQ®. The page will open in a new tab.
Enter the email address you use to sign in to your health system's network, then click Sign in. The Projects page appears.
To view the interactive reports and command center dashboards built by TeleTracking, select the TeleTracking folder. To work with your health system's customized content, select your health system's folder.
TeleTracking Best Practices
TeleTracking Technologies, Inc. has developed best practices that correlate with specific data points across the care continuum. Based on 30+ years of data collection, the following metrics are recommended as a standard of measure for their corresponding data points.
Access
Access best practices align operational and clinical resources to accelerate the processes that take place between a patient's arrival and their placement in the correct bed.
RTM to Assigned
Definition: The time from when the patient is clinically ready to move (RTM) to when the bed is assigned.
It should take less than 15 minutes for a patient to be assigned a bed once they are clinically ready to move (RTM).
Occupied Timer
Definition: The time from when the patient is clinically ready to move and has a clean bed assignment to when the patient occupies the bed in TeleTracking.
It should take less than 30 minutes for a patient to occupy a clean bed in TeleTracking once they are clinically ready to move (RTM).
RTM Compliance
Definition: The percentage of time that the patient is designated as ready to move (RTM) prior to assignment.
Patients should be designated as ready to move (RTM) before assignment 95% of the time or greater.
Total ED Hold Time
Definition: The time from the ED bed request to when the patient occupies the bed in TeleTracking.
It should take less than 120 minutes for a patient to occupy the bed from an ED bed request.
% Bed Assignment Conflicts
Definition: The percentage of time that the patient occupies a bed different from what was assigned.
Bed assignment conflicts should occur less than 10% of the time.
Transfer Center Physician Page to Return Call
Definition: The time between the initial physician page and the return call.
It should take no more than 5 minutes for a physician to return a call from the initial page.
Transfer Center Initial Call to Accepted
Definition: The time between the initial call to the Transfer Center until the patient is accepted.
There should be no more than 10 minutes between the initial call to the Transfer Center and when the patient is accepted.
Transfer Center Abandoned Call Rate
Definition: The percentage of calls disconnected prior to access center agent connection.
The percentage of disconnected calls should be less than 8%.
Transfer Center % Canceled or Denied
Definition: The percentage of patients that are either canceled or denied by the Transfer Center.
Less than 10% of patients should be canceled or denied by the Transfer Center.
Transfer Center Call Response Time
Definition: The time from the first ring to connection to an access center agent.
It should take 20 seconds or less to connect to an access center agent.
Throughput
Throughput best practices focus on improving communication among nurses, transporters, and environmental service employees.
RTM to Assigned (Internal Transfers)
Definition: The time from when the patient is clinically ready to move (RTM) to when the bed is assigned.
It should take less than 15 minutes for a patient to be assigned a bed once they are clinically ready to move (RTM).
Occupied Timer (Internal Transfers)
Definition: The time from when the patient is clinically ready to move and has a clean bed assignment to when the patient occupies the bed in TeleTracking.
It should take less than 30 minutes for a patient to occupy a clean bed in TeleTracking once they are clinically ready to move (RTM).
RTM Compliance (Internal Transfers)
Definition: The percentage of time that the patient is designated as ready to move (RTM) prior to assignment.
Patients should be designated as ready to move (RTM) before assignment 95% of the time or greater.
Transport Time on Task Dispatched to Complete
Definition: The time difference between when a transport job is accepted (dispatched) by the transporter and when the transporter completes the job.
It should take no more than 20 minutes for a transporter to complete a job once it has been accepted.
Transport Response Time Pending to In-Progress
Definition: The time from when a transport job is created (pending) to when the job begins (in progress).
There should be less than 15 minutes between when a transport job is created and when it begins.
Transport Job Time Pending to Complete
Definition: The time difference between when a job is created and when the job is completed.
There should be no more than 30 minutes between a job's creation and its completion.
% of Transport Dispatches Canceled or Rescheduled After Dispatched
Definition: The percentage of time that transport jobs are cancelled, or rescheduled, after the transporter has accepted the job.
Job cancellations should occur less than 10% of the time.
Delay Time
Definition: The total time that a transporter is delayed throughout the transport job.
Transporters should be delayed by no more than 5 minutes during a job.
Transporter Trips per Hour
Definition: The average number of trips that a transporter completes in one hour.
Transporters should complete 3 trips within one hour.
Discharge
Discharge best practices are used to coordinate, improve, and complete patient discharges.
Pending Discharge Compliance
Definition: The percentage of time that a discharge is preceded by a Pending Discharge.
Discharges should be preceded by a Pending Discharge 70% of the time or greater.
Confirmed Discharge Compliance
Definition: The percentage of time that a discharge is preceded by a Confirmed Discharge.
A Confirmed Discharge should precede a discharge 90% of the time or greater.
Confirmed to Actual Discharge
Definition: The elapsed time from when the Confirmed Discharge is received to when the patient actually leaves and the bed is marked dirty.
There should be less than 120 minutes between when a Confirmed Discharge is received and when the patient actually leaves.
% of Discharges by 11am
Definition: The percentage of patients that were discharged before 11am.
At least 25% of patients should be discharged by 11am.
% of Discharges by 2pm
Definition: The percentage of patients that were discharged before 2pm.
At least 50% of patients should be discharged by 2pm.
% of Discharges by Transport
Definition: The percentage of patients taken out by transporters.
Patients should be discharged by transporters 70% of the time or greater.
EVS Response Time
Definition: From the time EVS is notified of a dirty bed, to the time EVS marks the bed in progress (cleaning commences).
There should be 30 minutes or less between when EVS is notified of a dirty bed and when the bed is marked in progress.
EVS Clean Time (Standard Cleans)
Definition: From the time EVS begins cleaning the room (in progress status) to the completion of the room clean (clean status).
There should be 30 minutes or less between when EVS begins cleaning the room and when the room clean is complete.
EVS Turn Time (Standard Cleans)
Definition: From the time EVS is notified of a dirty bed, to the time that the bed is cleaned.
There should be no more than 60 minutes from the time EVS is notified of a dirty bed to when it is cleaned.
Patient Badging Compliance
Definition: The percentage of admitted patients badged with a Location Tracking badge.
Admitted patients should be badged with a Location Tracking badge 95% of the time.
Hardware and Server Configuration
Recommended Browser
Google Chrome is the recommended browser for Data IQ®.
Third-party Software and Components
MiNiFi Java Agent 1.8
Hardware Requirements
MiNiFi should be installed on a virtual server with the specifications below. The agent installs a specific version of Java needed to run Data IQ® properly.
For clients that have Capacity Management Suite® and/or Data IQ®, the virtual server set up for Data IQ® should be separate from the Capacity Management Suite® server.
Minimum requirement: 2 CPU core, 4 GB RAM
Recommended: 4 CPU core, 8 GB RAM
Updating Java on your machine doesn't present a problem for this application. However, if you are prompted to remove older versions after updating Java, do not uninstall any previous versions, as doing so will prevent Data IQ® from functioning properly.
Disk Space Requirements
You will need 100 GB+ of free disk space, which will mostly be used for the initial load.
OS Requirements
Windows (supporting Java 8)
Testing has occurred on Windows Server 2012 R2
Encryption and Authentication
All traffic is encrypted using PKI/TLS encryption. Access control is handled by client X509 certificates, and the certificates are unique to each customer.
Service Account Requirements
To connect to the on-premise SQL Server databases, you use either Windows Authentication or SQL authentication. Windows Authentication is preferred; however, in cases where Windows Authentication is not possible, use SQL Authentication. Details are in the tables below.
Windows Authentication
Account Description | Purpose | Role |
MiNiFi windows service account (domain user) | The account used to run the MiNiFi windows service. | DB Reader on the target databases. |
Installation account | The account used to install the MiNiFi windows service. | Local Administrator permissions; permission to create SQL views on the remote databases (DBO), or access to a SQL login account that has permission to create SQL views on the remote databases (DBO). |
SQL Authentication
Account Description | Purpose | Role |
SQL User account | The SQL Server account used for the MiNiFi windows service to connect to the SQL databases. | DB Reader to the on-premise databases. |
Network Service (local) | The account used to run the MiNiFi windows service. | DB Reader to the on-premise databases. |
Installation account | The account used to install the MiNiFi windows service. | Local Administrator permissions; permission to create SQL views on the remote databases (DBO), or access to a SQL login account that has permission to create SQL views on the remote databases (DBO). |
Whitelisting Data IQ®
DNS or Application Whitelisting
You will be able to whitelist Data IQ® using the domains listed in the table below. Traffic with a Web Proxy Capable value of "YES" may be used with a web proxy to perform content-based filtering rather than IP whitelisting.
AWS IP Whitelisting
Traffic with a Web Proxy Capable value of "NO" will have to be either DNS or IP address whitelisted.
All traffic ingress is fronted by high-performance elastic load balancers. The IP addresses of these load balancers are static, and must be whitelisted. This also means that the URL patterns listed should not run through a network proxy on the way out of the hospital network.
AWS IP Block Whitelisting
Amazon Web Services (AWS)-based services with dynamic IP addressing (such as Snowflake) cannot be whitelisted by a single IP address. Therefore, the entire IP block for the given AWS region must be whitelisted if DNS whitelisting cannot be used.
AWS publishes its current IP address ranges in JSON format. To view the current ranges, see the .json file. To maintain history, save successive versions of the .json file on your system.
To determine whether there have been changes since the last time that you saved the file, check the publication time in the current file and compare it to the publication time in the last file that you saved.
From | To | IP Addresses | Direction | Protocol | Port | Web Proxy Capable |
Tableau Desktop/Browser | US PROD: synapseIQ.teletracking.com, *.teletracking.app; EU PROD: synapseiq.eu.teletracking.app, *.eu.teletracking.app | US PROD: 75.2.84.51, 99.83.146.66; EU PROD: 52.223.10.22, 35.71.160.88 | OUT | TCP | 443 | YES |
Notes: Server reports, Dashboards, Publishing Content
This network rule should be trusted at the end user's laptop or desktop.
From | To | IP Addresses | Direction | Port | Web Proxy Capable |
Tableau Desktop | NA | OUT | 443 | YES |
Notes: Database connection
This network rule should be trusted at the end user's laptop or desktop.
From | To | IP Addresses | Direction | Protocol | Port | Web Proxy Capable |
Data Gateway/MiNiFi Server | LEGACY US PROD services.teledev.io *.teledev.io *.*.teledev.io
US PROD ingest.teletracking.app *.teletracking.app
EU PROD ingest.eu.teletracking.app, *.eu.teletracking.app *.*.teletracking.app | LEGACY US PROD 18.206.68.145 54.84.171.142
US PROD 3.210.106.230 35.171.121.159 54.196.75.81
EU PROD 3.126.166.12 3.68.128.3 3.73.60.18 | OUT | TCP MTLS | US PROD 5000 8443
EU PROD 5000 8443 | NO |
Notes: Data ingest
*.*.teledev.io may be needed when using a Palo Alto firewall. Otherwise, *.teledev.io should suffice.
From | To | IP Addresses | Direction | Protocol | Port | Web Proxy Capable |
Data Gateway/MiNiFi Server | Customer database server | | OUT | TCP | <SQL Data Port> (generally 1433) | NO |
Notes: Query data from Capacity IQ®/Transfer IQ® applications
Provision Users
Users with the Admin role set up and manage user accounts. The process for provisioning users is described in the following sections. The method by which Data IQ® users are provisioned is determined by (1) your health system’s current licensing for the Operations IQ® Platform and (2) the identity provider you will use. The following sections explain the options.
Understand your Current Setup
Operations IQ® Platform Clients
If you are currently using the Operations IQ® Platform and are provisioning users for Data IQ®, you will likely use your existing user setup process to provision these users. Your TeleTracking representative can provide guidance.
Determine Identity Provider
You set up and manage users through either your health system’s Active Directory or TeleTracking's Active Directory.
Your Health System’s Active Directory
If you plan to use or are currently using your health system’s Active Directory (AD) to manage users, the first step to provisioning users is adding security groups to your AD. The security groups map to reciprocal user roles in Data IQ®.
Report_Publishers_Group
Report_Editors_Group
Report_Viewers_Group
TeleTracking's Active Directory
If you plan to use or are using TeleTracking’s Active Directory, you can use either the Data Import option or the User option on the Admin menu in the TeleTracking IQ® platform. To determine which option is best for you, work with your TeleTracking representative.
Plan User Roles
User role determines the levels of permissions allowed for a user, including whether a user can publish, interact with, or only view content published to the server. You assign one or more roles to each user either in your active directory or when creating the user account in the Operations IQ® Platform.
Role | Description |
Report Publisher | Can create and publish new workbooks and data sources. |
Report Editor | Can view workbooks, interact with views, and edit/save customized views. |
Report Viewer | Can sign in, see, and filter published views. |
Provision Users
To provision Data IQ® users in your Active Directory, follow these steps:
In your Active Directory, add the security groups noted in Your Health System’s Active Directory.
Add each Data IQ® user to the appropriate security group.
Instruct users to sign on to the platform to activate their account.
Provide each user the URL to the Operations IQ® Platform and the username and password required for sign in.
Edit User Role
You can edit a user's role at any time. To edit a user's role in your Active Directory, add or remove the appropriate Data IQ® security group for the user. The next time the user signs in, the role change will be in effect.
Manage Passwords
You manage expiring and resetting passwords in your Active Directory.
User Roles
User role determines the levels of permissions allowed for a user, including whether a user can publish, interact with, or only view content.
Report Viewers
Report Viewers are typically consumers of the information on a dashboard and use the information to understand the health of the hospital, drive decisions, or monitor key indicators such as completed placements or averages for ready-to-move to occupied. Report viewers have access to the following options when looking at a dashboard:
Add comments
Share links
Subscribe to updates to a view
Save favorite views
Download a view
Report viewers cannot create a new workbook and will not see the option.
Report Editors
Report editors have the same permissions as a report viewer, plus these:
Edit an existing published workbook and add worksheets for views, dashboards, and stories.
Create and edit a new workbook based on a published data source.
Connect to different published data sources while editing.
Report Publishers
Report publishers have the same permissions as a report editor, plus the ability to publish new workbooks and data sources.
Dashboards
About
Dashboards are a collection of several views (also referred to as reports or charts) that enable you to compare a variety of data simultaneously. Data IQ® offers two types of dashboards: interactive, which are intended for desktop use, and enterprise, which are intended for a Command Center or huddle wall.
Dashboards that track performance, such as Environmental Services and Transport, include threshold indicators based on TeleTracking's Best Practices (see Thresholds).
Interactive (Desktop) Reports
Interactive reports present historical data in aggregate form. Historical data can range from five minutes ago to several months ago. Users who want to explore more specific layers of the data can drill down by hovering over or clicking an area of the report.
Enterprise Dashboards
Enterprise dashboards are intended for display on large, wall-mounted monitors in your Command Center or other key visible areas. They show similar information to interactive reports, except the information shown is for the current day. Because they are intended for viewing only, enterprise dashboards do not have filters and do not offer drill-down features. Another key difference between enterprise and interactive dashboards is color: because enterprise dashboards are intended for display in corridors or Command Centers, the dark background and contrasting colors enhance the readability of the information, making it easier for viewers to analyze and act.
Thresholds
Thresholds are defined by TeleTracking's Best Practices and are used on interactive and enterprise dashboards to help you track operational performance. The icons described below provide a visual snapshot of performance.
You can customize threshold values to suit your health system's operations.
Command Center Dashboards
Executive Overview
This dashboard replaces the Executive App and recreates its functionality in Data IQ®. It is designed specifically for mobile devices and is best viewed in the Tableau Mobile app, although it can also be viewed in a web browser. To see the mobile view on a desktop, select the Device Layouts button in the top right.
This dashboard displays the Enterprise Census, Census by Campus, Census by Unit Category, and Emergency Department metrics: Patients Waiting in ED, Current Longest Wait in ED, and Current Avg Wait Time in ED.
Data Source: Census Current + Current Patient Placement
Current EVS Employee Status
This dashboard shows the current status of EVS employees. The dashboard displays the number of employees in each status, the number of employees in each status by campus, and the number of idle employees sorted by longest time being idle (meaning the employee at the top of the list has been idle for the longest amount of time).
Data Source: Current BedTracking Employee Status
This data source should only be used with Capacity IQ® or Capacity Management Suite® version 2021.1 or earlier.
Current Transport Employee Status
This dashboard shows the current status of Transport employees. The dashboard displays the number of employees in each status, the number of employees in each status by campus, and the number of idle employees sorted by longest time being idle (meaning the employee at the top of the list has been idle for the longest amount of time).
Data Source: Current TransportTracking Employee Status
This data source should only be used with Capacity IQ® or Capacity Management Suite® version 2021.1 or earlier.
Transfer Center Case
This dashboard displays the activity of a single transfer center and indicates durations, percentages, and values that fall within or outside performance goals. The top sets of numbers are enterprise-wide values while the bottom sets of numbers are for facilities, campuses, and service lines.
Data Source: IQ Case
Facility Case Communication Transfer IQ®
This dashboard displays the durations of activities within facilities and shows the reasons patients are declined at individual facilities.
Data Source: IQ Facility Case + IQ Case
Physician Case Communication
This dashboard displays the average response times by physicians across the enterprise with emphasis on the response time and volume of declines created by physicians.
Data Source: IQ Physician Communication + Transfer Center
Enterprise Census
This dashboard allows you to track bed availability and capacity alongside additional census metrics.
Data Source: Census Current
Transport Performance
This dashboard displays the average duration of Transports and indicates durations that fall within or outside performance goals.
Data Source: Current Transport Tracking
EVS Performance
This dashboard displays the average duration of EVS jobs and indicates durations that fall within or outside performance goals. The first set of numbers (Average Response Time, Average Clean Time, and Average Turn Time) represents enterprise-wide durations.
Data Source: EVS Streaming
Enterprise Scorecard
This dashboard compares enterprise-wide durations, variables, and percentages against the prior month.
Data Sources: EVS Streaming, Census Current, IQ Case Patient Visit, and Current Transport Tracking
EVS Employee Performance
This dashboard breaks down environmental service metrics by employee to illustrate who is or is not meeting the thresholds of their position.
Data Source: BTJOBS
Facility Communication
This dashboard is designed to quantify the individual facility interactions that occur with every transfer case. Seeing these interactions can help monitor performance and accountability measures for your facilities.
Data Source: Transfer Center Facility Communications Detail
Patient Placement
This dashboard displays the average duration of patient placement across multiple fields and indicates durations that fall within or outside performance goals. In addition, it tracks patient counts by placement status.
Data Source: Patient Placement
Patient Placement Pipeline
This dashboard displays the key time intervals from when a bed becomes empty to when the next patient occupies the bed. The dashboard shows an average interval figure for all placements that have been completed. The pipeline highlights which parts of the process meet the best practice goals and those parts which might be causing delays to patient flow.
Data Sources: Current Patient Placement, Current Patient Visit, Current Transport Tracking, and EVS Streaming
Patient Visit
This dashboard displays the progress of discharges across an enterprise. The top set of numbers represents enterprise-wide durations, values, and percentages. The lower part of the dashboard shows discharge percentages by facility.
Data Source: Patient Visit
Note that the Discipline dimension for this data source is derived from the patient's current or last unit.
Enterprise Observation Patients
This dashboard shows the current observation patients in real time.
Data Source: Current Patient Visit
Transfer Center Opportunity
This dashboard compares the activity of different services from the current month to the prior month through numeric values and percent changes.
Data Sources: Transfer Center + Current Transport Tracking
Transfer Center Map
This dashboard displays the rate and progress of transfers per facility within an enterprise along with their geographic location on a topographic map.
Data Sources: IQ Case + Transfer IQ® Enterprise
Transport Employee Performance
This dashboard breaks down transporter metrics by employee to help illustrate who is or is not meeting the thresholds of their position.
Data Source: Current Transport Tracking
Capacity, Census, and Epidemiology
This dashboard displays key metrics such as ICU capacity and confirmed COVID-19 patients to help health systems combat COVID-19. The latest version of this dashboard also distinguishes between COVID-19 patients and all other patients and identifies patients by gender.
Data Sources: Census Current + Census Snapshot + Patient Visit
Users must filter out any isolation type of "None" or "Standard" for the total number of isolations (shown at the top of the dashboard) to display accurate counts.
Blocked Beds
This dashboard displays the number of currently blocked beds by reason and location. This dashboard is connected to the new fast lane data source, Current Location Action Status. The currently blocked beds are defined as those that have an Action of Blocked, and an End Status Timestamp that is Null (meaning the blocked action is still active). By default, this dashboard shows all currently blocked beds by campus. This can be filtered to specific campuses using the filter menu in the top-left corner. This can also be further broken out to show by Unit or by Location by hovering over the “Campus” header until a +/- button appears.
Data Sources: Current Location Action Status + Location Action Status
Interactive Dashboards
Backdated Discharges
Backdated Discharge Time is the time from the documented time of patient discharge (what is manually entered into the ADT) to the time it was actually entered and TeleTracking was notified of that patient's departure. By default, the Backdated Discharges interactive report shows only discharges with durations longer than 30 minutes; this can be changed using the parameter at the top of the report. Clicking a bar in the Backdated Discharge Volume chart filters the table below. The Backdated Discharge report is built from data in the Patient Visit data source.
Blocked Beds Interactive Report
The Historical Blocked Beds interactive report contains information for any bed that was ever blocked during the selected date range. The report shows the total number of blocked beds by both the reason and the bed location. By default, this report shows the last full month of data but can be changed using the menu button in the top-left corner. Similar to the command center version, this dashboard can easily change the level of aggregation by hovering over the campus header until the +/- button appears.
Care Progression Indicators
The Care Progression Indicators interactive report displays data relating to the Care Progression Indicators contained within the Capacity IQ® solution. The interactive report displays the care progression indicators by status, grouping, and duration.
Census Historical Trends
The Census Historical Trends interactive report allows users to see their census percent over time. A unique calendar view makes it easy to follow the trends throughout the calendar year. This report also breaks out census by campus, month, day of the week, and hour of the day. Users can switch between physical and staffed census by using the parameter in the menu.
Discharge Milestones
The Discharge Milestones interactive report displays data relating to the Discharge Milestones functionality contained within the Capacity IQ® solution. The interactive report displays the discharge milestones by status, duration, and location. It also lists the number of delays by location and reason.
Enterprise Placement
This interactive dashboard displays the average duration of bed requests along with the placement of patients. Except for RTM Compliance by Campus/Unit, the dashboard’s numbers represent enterprise-wide durations and values. The data is pulled from the Patient Placement flat table. You can filter the data by Date, Placement Type, and/or Campus.
The following sections describe the views that are available in the dashboard.
Completed Placements by Unit Category
What is it?
The average number of completed placements displayed by Origin Unit Category and Origin Unit.
Source of data
Patient Placement statistics. Data points included in the view are:
Origin Unit
Origin Unit Category
Number of Records
Filters applied to view
Admit Type =/= ‘Null’
Placement Status = Completed
Date Selector = 1
IF Date Type = “Today” THEN
IF Bedrequest Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Bedrequest Timestamp >= Start Date AND Bedrequest Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
Use to monitor where placements are coming from.
RTM Compliance by Campus
What is it?
The percentage of placements where RTM Time is before Bed-Assign Time.
Source of data
Patient Placement statistics. Data points included in the view are:
Origin Campus
Origin Unit Category
RTM Compliance %
SUM(IF [Placement Status]="Completed" THEN [RTM before assigned count] END)/SUM([Completed Placements])
where [RTM before assigned count] = IF DATEDIFF('minute', [Rtm Timestamp],[Bed Assigned Timestamp])>0 THEN 1 ELSE 0 END
Filters applied to view
Date Selector = 1
IF Date Type = “Today” THEN
IF Bedrequest Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Bedrequest Timestamp >= Start Date AND Bedrequest Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
Use to evaluate RTM compliance by campus.
Avg. ED Request to Occupied Time
What is it?
The average time (in minutes) from when a bed was requested for an ED patient to when the bed was occupied, shown by the hour.
Source of data
Patient Placement statistics. Data points included in the view are:
ED bedrequest To Bedoccupy Time - Average (IF [Origin Unit Category]="ED" THEN [Bedrequested To Bedoccupied Time] END)
BedRequest_Timestamp - Hour
Filters applied to view
Date Selector = 1
IF Date Type = “Today” THEN
IF Bedrequest Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Bedrequest Timestamp >= Start Date AND Bedrequest Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
TeleTracking's Best Practices recommend an average ED request to bed occupied time of less than 15 minutes.
Avg. RTM to Assigned Time
What is it?
The average time (in minutes) from when a patient was Ready To Move (RTM) to when the bed was assigned, shown by the hour.
Source of data
Patient Placement statistics. Data points included in the view are:
RTM To Bedassigned Time - Average
BedRequest_Timestamp - Hour
Filters applied to view
Date Selector = 1
IF Date Type = “Today” THEN
IF Bedrequest Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Bedrequest Timestamp >= Start Date AND Bedrequest Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
TeleTracking's Best Practices recommend an average RTM to assigned time of less than 15 minutes.
Avg. Request to Occupied Time
What is it?
The average time (in minutes) from when a bed is requested to when the bed is finally occupied, shown by the hour.
Source of data
Patient Placement statistics. Data points included in the view are:
BedRequested_to_BedOccupied_Time - Average
BedRequest_Timestamp - Hour
Filters applied to view
Date Selector = 1
IF Date Type = “Today” THEN
IF Bedrequest Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Bedrequest Timestamp >= Start Date AND Bedrequest Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
TeleTracking’s Best Practices recommend an average bed request to bed occupied time of less than 30 minutes.
Avg. RTM to Occupied Time
What is it?
The average time (in minutes) from when a patient is Ready to Move (RTM) to when the bed is finally occupied, shown by the hour.
Source of data
Patient Placement statistics. Data points included in the view are:
RTM_to_BedOccupied_Time - Average
BedRequest_Timestamp - Hour
Filters applied to view
Date Selector = 1
IF Date Type = “Today” THEN
IF Bedrequest Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Bedrequest Timestamp >= Start Date AND Bedrequest Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
TeleTracking's Best Practices recommend an average RTM to bed occupied time of less than 30 minutes.
Environmental Services
This interactive dashboard displays the progress of environmental services across an enterprise. The data is pulled from the BTJobs flat table. You can filter the data by Date, Campus, and/or Adjusted Clean Flag.
The following sections describe the views that are available in the dashboard.
Adjusted Cleans
What is it?
When the Adjusted Cleans Flag checkbox is selected, the cleaned beds included are only those that fall within the thresholds set by Total_InProgress_Time OR Response_Time. The percentage is the number of beds that have been cleaned vs all other beds.
Source of data
Bed cleaning job statistics. Data points included in the view are:
Count Adjust Cleans (yes)
SUM(IF [Adjusted Cleans Flag]="Y" THEN [Number of Records] END)
% yes adjust cleans
SUM(IF [Adjusted Cleans Flag]="Y" THEN [Number of Records] END)/TOTAL(SUM([Number of Records]))
Filter applied to view
Date Selector = 1
IF Date Type = “Today” THEN
IF Created Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Created Timestamp >= Start Date AND Created Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
Use to monitor the number of open and clean beds versus all other beds in the network.
Adjusted Cleans (%)
What is it?
When the Adjusted Cleans Flag checkbox is selected, the cleaned beds included are only those that fall within the thresholds set by Total_InProgress_Time OR Response_Time. The percentage is the number of beds that have been cleaned versus all other beds.
Source of data
Bed cleaning job statistics. Data points included in the view are:
Count Adjust Cleans (yes)
SUM(IF [Adjusted Cleans Flag]="Y" THEN [Number of Records]END)
% yes adjust cleans
SUM(IF [Adjusted Cleans Flag]="Y" THEN [Number of Records] END)/TOTAL(SUM([Number of Records]))
Filter applied to view
Date Selector = 1
IF Date Type = “Today” THEN
IF Created Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Created Timestamp >= Start Date AND Created Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
Use to monitor the number of open and clean beds versus all other beds in the network
Found Beds
What is it?
The number of beds that have been found dirty without a requested job.
Source of data
Bed cleaning job statistics. Data points included in the view are:
Found Beds
IF [Job Create Reason Type] = "Found Bed" THEN [Number of Records]
ELSE 0
END
Filter applied to view
Date Selector = 1
IF Date Type = “Today” THEN
IF Created Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Created Timestamp >= Start Date AND Created Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
Use to monitor how many beds are being found without a requested clean job.
Total vs Completed Jobs
What is it?
The number of total requested jobs versus completed jobs, displayed by campus.
Source of data
Bed cleaning job statistics. Data points included in the view are:
Campus
Unit
Clean beds (IF Job Status Type = “Complete” THEN Number of Records ELSE 0 END)
Filter applied to view
Date Selector = 1
IF Date Type = “Today” THEN
IF Created Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Created Timestamp >= Start Date AND Created Timestamp <= End Date THEN 1 ELSE 0 END
Job Status =/= Canceled
How to interpret
This should be used as a gauge of the productivity of each campus. Productivity is the number of clean beds divided by the number of requested jobs.
Avg. Response Time
What is it?
Response time is the number of minutes from when a bed cleaning job was created until it is put in progress. This view shows the average response time between the specified start and end dates.
Source of data
Bed cleaning job statistics. Data points included in the view are:
Response Time - Average
Created Timestamp - Hour
Filter applied to view
Avg(Response Time) =/='Null'
Date Selector = 1
IF Date Type = “Today” THEN
IF Created Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Created Timestamp >= Start Date AND Created Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
TeleTracking’s Best Practices recommend a Response Time of under 30 minutes.
Avg. Clean Time
What is it?
The average time in minutes that it took for EVS to clean a bed from in-progress status to clean status.
Source of data
Bed cleaning job statistics. Data points in the view are:
Total Inprogress Time - Average
Created Timestamp - Hour
Filter applied to view
Avg(Total Inprogress Time) =/='Null'
Date Selector = 1
IF Date Type = “Today” THEN
IF Created Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Created Timestamp >= Start Date AND Created Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
TeleTracking’s Best Practices recommend a Clean Time of under 30 minutes.
Avg. Turn Time
What is it?
The average time from when a bed cleaning job was created until it was completed.
Source of data
Bed cleaning job statistics. Data points included in the view are:
Overall Turn Time - Average
Created Timestamp - Hour
Filter applied to view
Avg(Overall Turn Time) =/='Null'
Date Selector = 1
IF Date Type = “Today” THEN
IF Created Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = “Custom Date” THEN
IF Created Timestamp >= Start Date AND Created Timestamp <= End Date THEN 1 ELSE 0 END
How to interpret
TeleTracking’s Best Practices recommend a Turn Time of under 60 minutes.
EVS Employee Performance
This dashboard breaks down environmental service metrics by employee to illustrate who is or is not meeting the specified thresholds. The data is pulled from the BT Jobs flat table. You can filter the data by Date and Employee.
Facility Case Communication
This dashboard is designed to quantify the individual facility interactions that occur with every transfer case. Seeing these interactions can help monitor performance and accountability measures for your facilities. You can filter the data by Date Type, Start / End Date, and Facility Name.
Facility Communication
This on-premise transfer center interactive dashboard displays the durations of activities within facilities and shows the reasons patients are declined, by facility. The data is pulled from the Transfer Center Facility Communication Detail flat table. You can filter the data by Start / End Date, Referring Facility, Destination Unit, and/or Level of Care.
Length of Stay
This interactive dashboard displays length of stay, discharge, and census metrics calculated from the Patient Visit data source. Length of Stay is based on the amount of time between the Admit Timestamp and the Actual Discharge Timestamp. This dashboard also displays metrics for all hospital patients as well as ICU patients specifically. Any patient with a unit category name of "critical care" will be considered an ICU patient.
The data is pulled from the Patient Visit data source. You can filter the data by Campus and Date Range.
Observation Patients
The Observation Patients interactive report shows all observation patients for the past two weeks, including patients who have been discharged. This allows you to monitor observation patients by location and observation duration.
Opportunity
This interactive dashboard compares the activity of different services from the current month to the prior month through numeric values and percent changes. The data is pulled from the Transfer IQ® and Transport Tracking flat tables. You can filter the data by Hospital Service, Referring Facility, and/or Referring Unit.
Patient Visit
This interactive dashboard displays the progress of discharges. The data is pulled from the Patient Visit data source. Note that the Discipline dimension for this data source is derived from the patient's current or last unit. You can filter the data by Date, Campus, and/or Unit.
Performance Trending Scorecard
The Performance Trending Scorecard is designed to help users look at the key Capacity IQ® metrics needed to improve patient flow all in one place. Each metric is a key indicator of where the patient flow opportunities and successes are and can help users understand what processes to focus on. Additionally, users can see these metrics over different time spans to help narrow or broaden the view of performance.
The Performance Trending Scorecard displays Full Historical data from Placement (broken out into multiple areas), Patient Visit, Transport, and EVS Standard Lane data sources. By default, the dashboard displays the data by month for the past 12 months. Daily, weekly, and monthly view options are provided to help users better understand which direction their patient flow efforts are heading.
To change these date aggregations and refine which campuses/units to include, simply click the filter menu in the top-left corner of the screen. Similar reports in Capacity IQ® were heavily filtered to include only certain campuses/units, so we recommend looking at all of the available filters to remove any unwanted units.
This scorecard can be further configured to modify the target metric values. For example, if your health enterprise has a different target metric for an EVS response time, click Show Target Values in the top-right corner of the screen. Then, type in the desired value to change the target, which will change the color calculations to match.
Physician Communication
This interactive dashboard displays the average duration of response times by physicians with emphasis on the response time and volume of declines created by physicians. The data is pulled from the Transfer Center and Transfer Center Consult flat tables. You can filter the data by Start / End Date, Campus Name, and/or Referring Facility.
Predictive Admissions
The Predictive Admissions interactive report leverages Tableau's built-in forecasting feature to predict campus admissions for the next 2 weeks. By default, the data is displayed for all units that have a full 90 days' worth of data. The blue line shows the actual admits, while the green line displays the predictions, as well as the confidence interval in the lighter shaded areas.
The tables on the right show a history of past forecasts, so you can compare the estimates for each campus with the actual admissions.
This report is not meant to predict admissions with 100% accuracy; during testing, forecast accuracy was closer to 90%. The forecast simply looks at Full Historical data to create a trend for the next 2 weeks.
If there is not enough Full Historical data to create a forecast, the report shows a "No forecast" tag on the chart. A campus with partial data will break the forecast, so a filter is applied to show only units with full data. To see which units are filtered out because of partial data, hover over the "Missing Units" text in the top right of the screen.
For more information on how the forecasting feature works, please review this Tableau white paper.
Transfer IQ® Issues and Notes
This interactive dashboard allows users to view notes attached to Transfer IQ® cases and recreate previous History reports. The data is pulled from the IQ Notes, IQ Issues, and IQ Issue Notes data sources.
Transfer Case Volume Analysis Interactive Report
The Transfer Case Volume Analysis report is a new Interactive report that allows you to easily see the detailed case volume information for a given date range. This report connects to the IQ_Case Standard Lane data source and allows for a Full Historical picture of Transfer IQ® cases. Clicking on any bar in one of the charts will filter all of the other charts in the report. The report breaks out the total cases by a number of metrics, including:
Disposition
Disposition Reason
Referring Facility
Referring Unit
Destination Facility
Preferred Facility
Case Type
Service Line
Transfer Reason
Patient Type
Payor Category
Users can also choose to look at this data in a row-by-row tabular view. Clicking the "Open Table" text at the top of the dashboard will bring the user to this tabular view.
Filters can also be applied on the tabular view by clicking the menu icon in the top-left corner.
Transfer Case Cancels
This interactive dashboard displays the overall case cancellations by campus and service line and shows trends in cancellations from previous years. The data is pulled from the Transfer IQ® flat tables (IQ Case or Transfer IQ®). You can filter the data by Date, Destination Facility, EMC Status, and/or Referring Unit.
Transfer Case Declines
This interactive dashboard displays the overall case declines by campus and service line and trends in declines from previous years. The data is pulled from the Transfer IQ® flat tables (IQ Case or Transfer IQ®). You can filter the data by Date, Destination Facility, EMC Status, and/or Referring Unit.
The following sections describe the views that are available in the dashboard.
Decline Volume
What is it?
The total amount of declined dispositions.
Source of data
The Transfer Center Data point included in the view is the SUM(Number of Records).
Filters applied to view
Case Disposition = Declined
How to interpret
Use to monitor overall declined dispositions.
Declines by Campus
What is it?
The amount of declined dispositions displayed over time by the Campus Name.
Source of data
The Transfer Center Data points included in the view are:
Campus Name
SUM(Number of Records)
Filters applied to view
Case Disposition = Declined
How to interpret
Use to monitor which campuses are declining the most cases.
Decline Volume Trending
What is it?
The number of declined dispositions displayed over time by the month and year of Case Created Date.
Source of data
Transfer Center Data points included in the view are:
Case Create Date – Month and year
SUM(Number of Records)
Filters applied to view
Case Created Date = Last 3 Years
Case Disposition = Declined
How to interpret
Use to monitor the volume of declined dispositions over time.
Declines by Service Line
What is it?
The amount of declined dispositions displayed over time by the Service Line.
Source of data
Transfer Center Data points included in the view are:
Service Line
SUM(Number of Records)
SUM(Contribution Margin)
Where [Number of Records * Contribution Margin Parameter]
Filters applied to view
Case Disposition = Declined
How to interpret
Use to monitor which service lines have the highest amount of declined dispositions.
Declines by Reason
What is it?
The amount of declined dispositions displayed over time by the Case Disposition Reason.
Source of data
Transfer Center Data points included in the view are:
Case Disposition Reason
SUM(Number of Records)
Filters applied to view
Case Disposition = Declined
How to interpret
Use to monitor the top reasons that cases are being declined.
Transfer Case Inbound Requests
This interactive dashboard displays the overall transfer case inbound requests by campus, service line, and trending volumes month over month. The data is pulled from the Transfer IQ® flat tables (IQ Case or Transfer IQ®). You can filter the data by Date, Destination Facility, EMC Status, and/or Disposition.
The following sections describe the views that are available in the dashboard.
Inbound Requests
What is it?
The total amount of inbound transfer requests.
Source of data
The Transfer Center Data point included in the view is the SUM(Number of Records).
Filters applied to view
None
How to interpret
Use to monitor overall inbound transfer requests.
Enterprise Volume
What is it?
The amount of inbound requests displayed by the Destination Facility.
Source of data
The Transfer Center Data points included in the view are:
Destination Facility
SUM(Number of Records)
Filters applied to view
None
How to interpret
Use to monitor which facilities are receiving the most inbound transfer requests.
Request Volume Trending
What is it?
The amount of inbound transfer requests displayed over time by the month and year of Case Entry Timestamp.
Source of data
Transfer Center Data points included in the view are:
Case Entry Timestamp – Month and year
SUM(Number of Records)
Filters applied to view
Case Entry Timestamp = Last 3 Years
How to interpret
Use to monitor the volume of inbound requests over time.
Volume by Service Line
What is it?
The amount of inbound requests displayed over time by the Hospital Service.
Source of data
Transfer Center Data points included in the view are:
Hospital Service
SUM(Number of Records)
SUM(Contribution Margin), calculated as [Number of Records * Contribution Margin Parameter]
Filters applied to view
None
How to interpret
Use to monitor which service lines have the highest amount of inbound requests.
Volume by Referring Facility
What is it?
The number of inbound requests displayed by the Referring Facility. The view can show all campuses or let the user compare the Top N (selectable parameter) campuses against all others.
Source of data
Transfer Center Data points included in the view are:
Referring Facility
SUM(Number of Records)
Filters applied to view
None
How to interpret
Use to monitor which referring facilities are sending the most inbound transfer requests.
Transfer Case Outbound Requests
This interactive dashboard displays the overall transfer case outbound requests by campus, service line, and trending volumes month over month. The data is pulled from the Transfer IQ® flat tables (IQ Case or Transfer IQ®). You can filter the data by Date, Referring Facility, and/or EMC Status.
The following sections describe the views that are available in the dashboard.
Outbound Requests
What is it?
The total amount of outbound transfer requests.
Source of data
The Transfer Center Data point included in the view is the SUM(Number of Records).
Filters applied to view
Case Disposition = Accepted
How to interpret
Use to monitor overall outbound transfer requests.
Volume by Transfer Reason
What is it?
The amount of outbound requests displayed over time by the Transfer Reason.
Source of data
The Transfer Center Data points included in the view are:
Transfer Reason
SUM(Number of Records)
Filters applied to view
Case Disposition = Accepted
How to interpret
Use to monitor the top transfer reasons for outbound requests.
Request Volume Trending
What is it?
The amount of outbound transfer requests displayed over time by the month and year of Case Entry Timestamp.
Source of data
Transfer Center Data points included in the view are:
Case Entry Timestamp – Month and year
SUM(Number of Records)
Filters applied to view
Case Entry Timestamp = Last 3 Years
Case Disposition = Accepted
How to interpret
Use to monitor the volume of outbound requests over time.
Volume by Service Line
What is it?
The amount of outbound requests displayed over time by the Service Line.
Source of data
Transfer Center Data points included in the view are:
Hospital Service
SUM(Number of Records)
SUM(Contribution Margin), calculated as [Number of Records * Contribution Margin Parameter]
Filters applied to view
Case Disposition = Accepted
How to interpret
Use to monitor which service lines have the highest amount of outbound requests.
Volume by Destination
What is it?
The number of outbound requests displayed by the Destination Facility. The view can show all campuses or let the user compare the Top N (selectable parameter) campuses against all others.
Source of data
Transfer Center Data points included in the view are:
Destination Facility
SUM(Number of Records)
Filters applied to view
Case Disposition = Accepted
How to interpret
Use to monitor which destination campuses are receiving the most outbound transfer requests.
Transfer Center Case Teams
This Full Historical report displays information related to the case team assigned to transfer cases and a detailed breakdown of transfer cases.
Transfer Center History
Shows transfers for a given year, month, and/or day. The data is pulled from the Transfer Case or IQ Case data sources. You can filter the data by Month, Year, Campus Name, and/or Case Disposition.
Transfer IQ® Case Breakout
The Transfer Center Case Breakout interactive report displays trending information for all Transfer IQ® cases over a given period of time. By default, the report shows the cases broken out by week and disposition. Each section also displays the cases by In Network and Out of Network. Each cell is compared to the previous value with both the text color and arrows.
The Transfer Center Case Breakout report is connected to the IQ_Case_Enterprise and Case_Physician_Communication Standard Lane data sources. This report is also highly configurable, with multiple filters for each section to ensure the most accurate data is being displayed. Please review the filter options when initially setting up the report.
The Lost Business section at the bottom displays the number of cases and a percentage calculated as (Cancelled + Declined Cases)/(All Cases – Consults – Other).
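As a minimal illustration of that calculation (the case counts below are made up, not taken from any report):

def lost_business_pct(cancelled, declined, all_cases, consults, other):
    # (Cancelled + Declined Cases) / (All Cases - Consults - Other), as a percentage.
    denominator = all_cases - consults - other
    return 100.0 * (cancelled + declined) / denominator if denominator else 0.0

# Example: 12 cancelled and 8 declined cases out of 210 total, excluding 25 consults and 5 "Other" cases.
print(round(lost_business_pct(12, 8, 210, 25, 5), 1))  # 11.1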
Transfer Milestones
The Current Transfer Milestones fast lane data source and Interactive Report allow users to view Transfer Milestone data from the Capacity IQ® solution. The interactive report gives a detailed table view of the Transfer Milestones, focusing on Transfer Assign Time, Transfer Request Time, Patient Transfer Time, Patient Expected Discharge Time, and Actual Patient Discharge Time. The indicator in the rightmost column shows whether the patient was discharged within 24 hours of the transfer taking place.
TransferCenter Case Breakout
The Transfer Center Case Breakout interactive report displays trending information for all On Premise Transfer cases over a given period of time. This report differs slightly from the Transfer IQ® version that displays the cases by in and out of network. Users can still see the cases by in network and out of network by using the Destination Health System in the filter menu. By default, the report shows the cases broken out by week and disposition. Each cell is compared to the previous value with both the text color and arrows. This report is connected to the Transfer_Center_Consult Standard Lane data source.
Transfers
Displays case volumes by location, disposition, case type, service line, and destination/referring facilities. Clicking on any of the bars will filter the map to those specific locations. This interactive report is connected to the IQ_Case Standard Lane data source.
Transport Employee Performance
This dashboard breaks down transporter metrics by employee to help illustrate who is or is not meeting the thresholds of their position. The data is pulled from the Transport Tracking Jobs flat table. You can filter the data by Date and/or Completed By User.
Transport Services
This interactive dashboard displays the average duration of Transport requests along with the time taken by cancels and delays after dispatch. The charts on the right side of the dashboard show enterprise-wide durations. The data is pulled from the Transport Tracking flat table. You can filter the data by Date, Requesting Campus, and/or Destination Campus.
The following sections describe the views that are available in the dashboard.
Delays by Campus
What is it?
The average delay time shown by Origin Campus and Origin Unit.
Data source
Transport job statistics. Data points included in the view are:
Origin Campus
Origin Unit
Total Delay Time
Filters applied to view
Date Selector = 1, where Date Selector is calculated as:
IF Date Type = "Today" THEN
IF Completed Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = "Custom Date" THEN
IF Completed Timestamp >= Start Date AND Completed Timestamp <= End Date THEN 1 ELSE 0 END
END
How to interpret
Use this to gauge which campuses have the highest delay times. (The Date Selector logic used across this dashboard's views is sketched below.)
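The Date Selector calculation above (and in the views that follow) can be sketched in Python roughly as shown below; the field names are taken from the formula and are assumptions about the underlying data.

from datetime import date, datetime

def date_selector(date_type, completed_timestamp, start_date, end_date):
    # Returns 1 when the completed timestamp falls inside the selected window, otherwise 0.
    if date_type == "Today":
        return 1 if completed_timestamp.date() == date.today() else 0
    elif date_type == "Custom Date":
        return 1 if start_date <= completed_timestamp.date() <= end_date else 0
    return 0

# Example: a job completed right now always passes the "Today" selector.
print(date_selector("Today", datetime.now(), None, None))  # 1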
Cancels After Dispatch
What is it?
The average time (in minutes) that it took from Dispatched to Cancelled for different Job Types, displayed by Cancelled Reason. Color also indicates the volume of cancels for each reason.
Data source
Transport job statistics. Data points in the view are:
Dispatched to Cancelled Time Average
TTJob Type
Canceled Reason
Filters applied to view
Date Selector = 1, where Date Selector is calculated as:
IF Date Type = "Today" THEN
IF Completed Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = "Custom Date" THEN
IF Completed Timestamp >= Start Date AND Completed Timestamp <= End Date THEN 1 ELSE 0 END
END
Dispatched to Canceled Time is not Null
How to interpret
Use to monitor the top cancel reasons for transport requests.
Avg. Pending to InProgress
What is it?
The average time (in minutes) that it took from when a job is requested to when the job is in progress. The line chart shows this by the Hour of Completed Timestamp.
Data source
Transport job statistics. Data points included in the view are:
Request to Inprogress Time - Average (DATEDIFF('minute',[Createddate],[Inprogress Timestamp]))
Completed Timestamp - Hour
Filters applied to view
Date Selector = 1, where Date Selector is calculated as:
IF Date Type = "Today" THEN
IF Completed Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = "Custom Date" THEN
IF Completed Timestamp >= Start Date AND Completed Timestamp <= End Date THEN 1 ELSE 0 END
END
How to interpret
TeleTracking’s Best Practices recommend the average Request to Inprogress time be less than 15 minutes.
Avg. Pending to Dispatch
What is it?
The average time (in minutes) that it took from Pending to Dispatched. The line chart shows this by the Hour of Completed Timestamp. This is the time it takes from when a transport request is pending until it is dispatched.
Data source
Transport job statistics. Data points included in the view are:
Pending to Dispatch Time - Average
Completed Timestamp - Hour
Filters applied to view
Date Selector = 1, where Date Selector is calculated as:
IF Date Type = "Today" THEN
IF Completed Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = "Custom Date" THEN
IF Completed Timestamp >= Start Date AND Completed Timestamp <= End Date THEN 1 ELSE 0 END
END
How to interpret
TeleTracking’s Best Practices recommend the average Pending to Dispatch time be less than 5 minutes.
Avg. Dispatch to Complete
What is it?
The average time (in minutes) to progress a task from Dispatch to Completed. The line chart shows this by the Hour of Completed Timestamp. This is the time it takes from when a task is dispatched until it is complete.
Data source
Transport job statistics. Data points included in the view are:
Dispatch to Completed Time - Average
Completed Timestamp - Hour
Filters applied to view
Date Selector = 1, where Date Selector is calculated as:
IF Date Type = "Today" THEN
IF Completed Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = "Custom Date" THEN
IF Completed Timestamp >= Start Date AND Completed Timestamp <= End Date THEN 1 ELSE 0 END
END
How to interpret
TeleTracking’s Best Practices recommend the average Dispatch to Completed time be less than 20 minutes.
Avg. InProgress to Complete
What is it?
The average time (in minutes) to progress a task from Inprogress to Completed. The line chart shows this by the Hour of Completed Timestamp. This is the time it takes from when a task is in progress until it is complete.
Data source
Transport job statistics. Data points included in the view are:
Inprogress to Completed Time - Average
Completed Timestamp - Hour
Filters applied to view
Date Selector = 1, where Date Selector is calculated as:
IF Date Type = "Today" THEN
IF Completed Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = "Custom Date" THEN
IF Completed Timestamp >= Start Date AND Completed Timestamp <= End Date THEN 1 ELSE 0 END
END
How to interpret
TeleTracking’s Best Practices recommend the average Time on Task be less than 10 minutes.
Avg. Pending to Complete
What is it?
The average time (in minutes) that it took to complete a task from Pending to Completed Time. The line chart shows this by the Hour of Completed Timestamp. This is the time it takes from when a task is pending until it is complete.
Data source
Transport job statistics. These are the data points included:
Pending to Completed Time - Average
Completed Timestamp - Hour
Filters applied to view
Date Selector = 1, where Date Selector is calculated as:
IF Date Type = "Today" THEN
IF Completed Timestamp = Today() THEN 1 ELSE 0 END
ELSEIF Date Type = "Custom Date" THEN
IF Completed Timestamp >= Start Date AND Completed Timestamp <= End Date THEN 1 ELSE 0 END
END
How to interpret
TeleTracking’s Best Practices recommend the average Time on Task be less than 30 minutes.
Volume Analysis
The Volume Analysis interactive report enables you to review summary information captured every hour by Capacity IQ®. This data shows the total number of admits, transfers, confirmed discharges, and other events that occurred during a specified timeframe. To see a list of all available fields in the Volume Analysis data source, please view the recently updated Data Sources tab.
Subscribe to a dashboard
You can subscribe to a dashboard to receive recurring updates on data from a particular view or entire workbook.
For example, if you wanted to review performance metrics every morning at 8am, you could create a custom email subscription which would forward the previous day's metrics to your inbox at the scheduled time.
Before setting up your subscription, ensure that your dashboard or report is set to the correct view.
Only interactive reports are available for subscription.
Click Subscribe from the toolbar.
Open the Schedule drop-down and specify the time, frequency, and days that you want to receive updates on a report.
Click the Time Zone link at the bottom of the dialog to change time zones.
Click Subscribe.
Once you've set up your subscription, you can manage it (and any other subscriptions based on the same workbook) from the Subscriptions tab in your workbook.
Customize Parameter Thresholds
On certain dashboards, thresholds can be changed without having to create new custom calculations. This allows you to set thresholds such as Response Time or Clean Time to your own hospital's standards.
The following dashboards have adjustable parameter thresholds:
Interactive Dashboards
Environmental Services
EVS Employee Performance
Transport Employee Performance
Transport Services
Enterprise Placement
Command Center Dashboards
EVS Performance
EVS Employee Performance
Transport Employee Performance
Enterprise Transport Performance
Patient Placement
Customizing parameters on an Interactive Dashboard
Interactive dashboards contain a text box that allows you to change a threshold by typing in a new metric. Once you change the default threshold metric, the visualization will automatically be updated.
Customizing parameters on a Command Center Dashboard
The thresholds on a Command Center dashboard can be adjusted in Edit mode.
Navigate to a dashboard and click Edit in the upper right-hand corner.
From the Sheets section, hover over any sheet and click the Go to sheet button.
This will populate a Dimensions, Measures, and Parameters section in the Data tab.
Under the Parameters section, hover over the parameter you want to adjust and open the drop-down menu. Then select Show Parameter Control.
A text box will appear on the right side of your screen that will allow you to type in any value for the threshold you've selected.
Enter a new value for the selected threshold.
Once you return to the dashboard, any metrics affected by the threshold you've edited will adjust to present data based on your new threshold.
How to Edit Tooltips
Dashboard tooltips can be customized to display only the information most relevant to your health system.
Click Edit from the toolbar of a selected dashboard.
Go to the sheet containing the tooltip you want to edit.
Click Tooltip under the Marks column.
Edit the content inside the tooltip from the Edit Tooltip window.
From here, you'll have the option to format the content, insert calculations or parameters, add dimensions or measures, etc.
Format content (font, font color, etc.).
Insert metrics such as calculations or parameters.
Add or remove dimensions or measures.
Click OK and then Save to keep your changes.
Create Parameters in Web
Creating a parameter in web allows users to filter between data sources that are not linked.
For instance, a single Campus parameter could be created that's used to drive the campus field for each of the data sources associated with that dashboard.
Click Edit from the toolbar of a selected dashboard.
Open the drop-down menu next to Dimensions and select Create Parameter.
Fill in the properties and values based on the type of parameter you want to create. Then click OK.
Once a parameter has been successfully created, you can create calculations that include the new parameter in each of the data sources you want to filter between.
Standard Content Load Times
The tables below outline the average load times for the standard content Interactive and Command Center dashboards available in Data IQ®.
In the Operational Solution column, anything listed for Capacity IQ® also applies to Capacity Management Suite® (as of October 4, 2024).
Command Center Dashboards
Operational Solution | Dashboard Name | Load Time (Seconds) | Multiple Data Sources |
Transfer IQ® | Access Scorecard | Under 40 | Yes |
Capacity IQ® | Capacity, Census, Epidemiology V1 | Under 30 | Yes |
Capacity IQ® | Capacity, Census, Epidemiology V2 | Under 30 | Yes |
Classic TransferCenter | Scorecard | Under 30 | Yes |
Capacity IQ® | Capacity, Census, Epidemiology V3 | Under 20 | Yes |
Classic TransferCenter | Areas of Opportunity | Under 20 | Yes |
Transfer IQ® | Areas of Opportunity | Under 20 | Yes |
Capacity IQ® | Census | Under 10 | No |
Capacity IQ® | EVS | Under 10 | No |
Capacity IQ® | EVS Employee | Under 10 | No |
Capacity IQ® | Patient Placement | Under 10 | No |
Capacity IQ® | Patient Visit | Under 10 | No |
Capacity IQ® | Transport | Under 10 | No |
Capacity IQ® | Transport Employee | Under 10 | No |
Classic TransferCenter | Wallboard Template | Under 10 | No |
Classic TransferCenter | Facility | Under 10 | No |
Classic TransferCenter | Transfer Center | Under 10 | No |
Classic TransferCenter | Physician | Under 10 | No |
Transfer IQ® | Map | Under 10 | No |
Transfer IQ® | Transfer Center | Under 10 | No |
Transfer IQ® | Physician | Under 10 | No |
Transfer IQ® | Facility | Under 10 | No |
Interactive Dashboards
Operational Solution | Dashboard Name | Load Time (Seconds) | Multiple Data Sources |
Capacity IQ® | Capacity, Census, Epidemiology V2 | Under 5 | Yes |
Capacity IQ® | Capacity, Census, Epidemiology V1 | Under 5 | Yes |
Capacity IQ® | Transport | Under 5 | No |
Capacity IQ® | Transport Employee | Under 5 | No |
Capacity IQ® | Capacity, Census, Epidemiology V3 | Under 5 | Yes |
Capacity IQ® | Census | Under 5 | No |
Capacity IQ® | EVS Employee | Under 5 | No |
Capacity IQ® | Placement | Under 5 | No |
Classic TransferCenter | TC History Table | Under 20 | Yes |
Transfer IQ® | Areas of Opportunity | Under 20 | Yes |
Capacity IQ® | CMS History Table | Under 5 | No |
Capacity IQ® | EVS | Under 5 | No |
Capacity IQ® | Interactive Template | Under 5 | No |
Capacity IQ® | LOS | Under 5 | No |
Capacity IQ® | Patient Visit | Under 5 | No |
Classic TransferCenter | Areas of Opportunity | Under 10 | No |
Classic TransferCenter | Facility | Under 10 | No |
Classic TransferCenter | TC Cancels | Under 10 | No |
Classic TransferCenter | TC Declines | Under 10 | No |
Classic TransferCenter | TC Inbound | Under 10 | No |
Classic TransferCenter | TC Outbound | Under 10 | No |
Classic TransferCenter | Physician | Under 10 | No |
Transfer IQ® | TCIQ History Table | Under 10 | No |
Transfer IQ® | TCIQ Cancels | Under 10 | No |
Transfer IQ® | TCIQ Declines | Under 10 | No |
Transfer IQ® | TCIQ Inbound | Under 10 | No |
Transfer IQ® | TCIQ Outbound | Under 10 | No |
Transfer IQ® | TCIQ Map | Under 10 | No |
Transfer IQ® | Facility | Under 10 | No |
Transfer IQ® | Physician | Under 10 | No |
Data Sources Overview - Standard Lane - Full Historical and extracts
BTJob
BTJob is a Standard Lane data source showing Environmental Services data from Capacity IQ®. Each row in this data source shows an individual cleaning job. The unique identifier for this dataset is a combination of Instance_ID and Job_Id. This data source contains historical data and has a latency of ~45-90 minutes.
Care Progression Indicators
Care Progression Indicators is a Standard Lane data source that contains Care Progression Group information where there is at least one ordered Care Type. This data source also includes Patient Visit information associated with the Care Progression Group. The unique identifier for this dataset is a combination of Instance ID and Milestone_Entity_Id. This data source contains historical data and has a latency of 3 hours.
Care Progression Indicators Detail
Care Progression Indicators Detail is a Standard Lane data source that contains Care Type information for each Ordered Care Progression Group. This data source also includes Patient Visit information associated with the Care Progression Group. Each row is uniquely identified by Milestone_Entity_Detail_Id. This data source contains historical data and has a latency of 3 hours.
Case Physician Communication
Case Physician Communication is a Standard Lane data source that provides a historical perspective on physician communication records originating from Transfer IQ®. It encompasses timestamps of various communication events, including the initial contact, the initial response (when the call is returned), and the most recent timestamp associated with physician disposition (Cancelled, Consulted, Declined, Accepted, Admitting). Typically, each row corresponds to a single physician communication card, and multiple rows may exist in cases where communication cards for the physician were deleted. Currently, each row in this data set is uniquely identified by Case_Communication_Id. Prior to 2022, the unique identifier for this dataset was a combination of Transfer_Id and Physician_Id. This data source contains historical data and has a latency of ~45-90 minutes.
This data source is joined against IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
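For Warehouse Connector customers, that join can be sketched as below; the table and column names are illustrative assumptions and should be confirmed against your provisioned Snowflake schema.

# Illustrative join of physician communication records to their transfer cases.
# Table and column names are assumptions, not the confirmed warehouse schema.
PHYSICIAN_COMMUNICATION_JOIN_SQL = """
SELECT c.TRANSFER_ID,
       c.CASE_DISPOSITION,
       p.CASE_COMMUNICATION_ID,
       p.PHYSICIAN_DISPOSITION
FROM   IQ_CASE AS c
JOIN   CASE_PHYSICIAN_COMMUNICATION AS p
  ON   p.TRANSFER_ID = c.TRANSFER_ID
"""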
Census Snapshot
Census Snapshot is a Standard Lane data source that contains hourly census data for each unit. All measures and dimensions relate to the unit. The unit must have a discipline other than 'None' in order to calculate any measures. Each row in this data set represents a snapshot of a given hour for an individual unit. Each row in this data set is uniquely identified by a combination of Instance ID and Statistic_Snapshot_Id. This data source contains historical data and has a latency of ~45-90 minutes.
Discharge Milestones
Discharge Milestones is a Standard Lane data source that contains a historical view of the Discharge Milestones recorded, the times they are marked as delayed or completed, and any notes associated with them. Each row in this data set represents an individual discharge milestone for a patient record. The unique identifier for this dataset is a combination of Instance ID and Milestone_Entity_Detail_Id. This data source contains historical data and has a latency of 3 hours.
IQ Behavioral Health
IQ Behavioral Health is a Standard Lane data source that contains historical details about behavioral health assessments recorded in Transfer IQ®. Each row in this data set represents a single assessment in a Transfer Center Case. A transfer center case may have more than one assessment associated with it. The unique identifier for this dataset is Behavioral_Id. This data source contains historical data and has a latency of ~45-90 minutes.
IQ Case
IQ Case is a Standard Lane data source that contains transfer case data. Each row in this data set represents an individual transfer center case. The unique identifier for this dataset is Transfer_Id. This data source contains historical data and has a latency of ~45-90 minutes.
IQ Case Escalation Communication
IQ Case Escalation Communication is a Standard Lane data source that contains a historical view of escalation communication records from Transfer IQ®. It includes the times they are created, the initial contact recorded, when the call is initially returned (Initial Response), and the latest time an escalation disposition is recorded (cancelled, resolved, unresolved). Each row in this data source represents an individual escalation communication card. The unique identifier for this dataset is a combination of Transfer_Id and Staff_Id or Physician_Id. This data source contains historical data and has a latency of ~45-90 minutes.
This data source is joined against IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
IQ Case Escalation Communication Detail
IQ Case Escalation Communication Detail data source contains the detailed events occurring in escalation communications in each transfer center case. Each of the events in the communications history will be a unique row in this data set, including the times contacted, the times where the call is returned, and the times when any disposition is set on the communication record. The unique identifier for this dataset is Communication_Id. This data source contains historical data and has a latency of ~45-90 minutes.
This data source is joined against IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
IQ Case Facility Communication
IQ Case Facility Communication is a Standard Lane data source that provides a historical perspective on facility communication records originating from Transfer IQ®. It encompasses timestamps of various communication events, including the initial contact, the initial response (when the call is returned), and the most recent timestamp associated with facility disposition (Cancelled, Consulted, Accepted, Redirected and Declined). Each row corresponds to a single facility communication card, and multiple rows may exist in cases where communication cards for the facility were deleted. The unique identifier for this dataset is a combination of Facility_Id and Transfer_Id. This data source contains historical data and has a latency of ~45-90 minutes.
This data source is joined against IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
IQ Case Facility Communication Detail
The IQ Case Facility Communication Detail is a Standard Lane data source that captures comprehensive communication events within a facility communication card for each transfer center case. Each event in the communication history is represented as a distinct row in this dataset. It provides timestamps for contact, when a call is returned, and when any disposition is applied to the facility communication record. The unique identifier for this dataset is Communication_Id. This data source contains historical data and has a latency of ~45-90 minutes.
This data source is joined against IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
IQ Case Issue
IQ Case Issue is a Standard Lane data source that contains issue data recorded in transfer cases. Each row in this data set represents an individual issue recorded for a transfer case. This data source includes deleted issues. The unique identifier for this data set is Issue_Id. This data source contains historical data and has a latency of ~45-90 minutes. This data source is joined against the IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
IQ Case Issue Note
IQ Case Issue Note is a Standard Lane data source that contains notes recorded against issues in a transfer case. Each row in this data set represents a note recorded in an issue. This data source includes deleted notes. The unique identifier for this data set is Issue_Note_Id. This data source contains historical data and has a latency of ~45-90 minutes. This data source is joined against the IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
IQ Case Note
IQ Case Note is a Standard Lane data source that contains any notes recorded to a transfer center case. Each row represents individual case notes. The unique identifier for this dataset is Note_Id. This data source contains historical data and has a latency of ~45-90 minutes. This data source is joined against IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
IQ Case Physician Communication Detail
The IQ Case Physician Communication Detail data source contains the detailed events occurring in physician communications in each transfer center case. Each of the events in the communications history will be a unique row in this data set, including the times contacted, the times when the call is returned, the times conferenced, and the times when any disposition is set on the communication record. The unique identifier for this dataset is Communication_Id. This data source contains historical data and has a latency of ~45-90 minutes.
This data source is joined against IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
As pictured below, each history item recorded will be in this data source.
IQ Case Staff Communication
IQ Case Staff Communication is a Standard Lane data source that provides a historical perspective on staff communication records originating from Transfer IQ®. It encompasses timestamps of various communication events, including the initial contact, the initial response (when the call is returned), and the most recent timestamp associated with staff disposition (Cancelled, Declined, Accepted, and Consulted). Typically, each row corresponds to a single staff communication card, and multiple rows may exist in cases where communication cards for the staff member were deleted. The unique identifier for this dataset is Case_Communication_Id. This data source contains historical data and has a latency of ~45-90 minutes.
This data source is joined against IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
IQ Case Staff Communication Detail
IQ Case Staff Communication Detail is a Standard Lane data source that captures comprehensive communication events with a staff member within each transfer center case. Each event in the communication history is represented as a distinct row in this dataset, providing timestamps for contact, when a call is returned, and when any disposition is applied to the staff communication record. The unique identifier for this dataset is Communication_Id. This data source contains historical data and has a latency of ~45-90 minutes. This data source is joined against IQ Case data source within Data IQ®. Data IQ® Warehouse Connector customers will need to perform this join within their queries.
IQ Enterprise Structure
IQ Enterprise Structure is a Standard Lane data source that contains the Enterprise Structure configured in the IQ Enterprise Settings. Each row in this data set represents a facility and where it sits within the Enterprise Structure hierarchy (Enterprise 1 is the topmost level of the Enterprise Structure and Enterprise 26 is the lowest level). This data source excludes Deleted Facilities. The unique identifier for this data set is Facility_Id. This data source contains historical data and has a latency of ~45-90 minutes. This data source in Data IQ® is joined against the IQ Case data source in three possible ways: Referring Facility, Destination Facility, or Preferred Facility. Data Warehouse Connector customers will need to perform this join within their queries.
Location Action Status
Location Action Status is a Standard Lane Data Source that contains bed location and status level information from Capacity IQ® for beds with EVS Service enabled. The granularity of this data source is at the Location Status Change Level for a given location – meaning each row represents a different status change for a bed and contains the start and end time for that status. If the status change was associated with a patient visit, then the patient visit and MRN information will be available. The unique identifier for this dataset is a combination of Instance ID, Source_Table and Source_table_primarykey_id. This data source has a latency of ~45-90 minutes.
For example, the image below shows the Actions on a two-bed location for two days. Each action has a Status Start Timestamp (when the action began) and a Status End Timestamp (when the status was removed). In the cases where there are 2 rows for an action (Dirty and Pending), both of those actions were started and completed at the same time.
The following image provides a visual example of the actions for a given location:
In this example, Bed 1 was occupied from Midnight until 4 PM, then it was set to a status of Pending/Dirty for about an hour. Then, the action status was briefly changed to action statuses of In Progress and Complete before being Clean for about 5 hours.
In the Blocked Beds standard content, users can see the currently blocked beds by looking at any location that has an Action of “Blocked” and a Status End Timestamp of “Null”. This means that the action has been set to “Blocked”, but that action has not been removed.
This same logic can be used to create a list of currently occupied patients. Any patient that has an Action of “Occupied” and a Status End Timestamp of “Null” are currently occupying a bed in the health system.
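A minimal pandas sketch of both checks, assuming the data has been pulled into a DataFrame whose column names mirror the fields above (the names and sample rows are illustrative):

import pandas as pd

# Illustrative rows; column names are assumptions based on the fields described above.
location_action_status = pd.DataFrame({
    "Location_Name": ["4N-401-A", "4N-401-B", "5S-502-A"],
    "Action": ["Blocked", "Occupied", "Clean"],
    "Status_End_Timestamp": [pd.NaT, pd.NaT, pd.Timestamp("2024-01-15 16:00")],
})

# A status is still active when its Status End Timestamp is null.
still_open = location_action_status["Status_End_Timestamp"].isna()
blocked_now = location_action_status[(location_action_status["Action"] == "Blocked") & still_open]
occupied_now = location_action_status[(location_action_status["Action"] == "Occupied") & still_open]
print(blocked_now["Location_Name"].tolist())   # ['4N-401-A']
print(occupied_now["Location_Name"].tolist())  # ['4N-401-B']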
For any additional questions, please contact TeleTracking® Support.
Patient Placement
Patient Placement is a Standard Lane data source that contains data related to Patient Placements. Each row in this data source represents an individual patient placement request (Preadmits and Pending Transfers). The unique identifier for this data set is a combination of Instance_ID and Placement_Id. This data source contains historical data, and has a latency of ~45-90 minutes.
Patient Visit
Patient Visit is a Standard Lane data source that contains a historical view of patient visit records, the latest times they are admitted, marked as pending/confirmed discharge, and actually discharged. This data source also includes all the patient details dimensions. Each row in this data source represents an individual patient visit record on an instance of the Capacity IQ® solution. The unique identifier for this dataset is a combination of Instance_ID and Patient_Visit_Id. The Patient_Visit_Id is a different field from the Patient_Visit_Number. The Patient_Visit_Number is visible on the front end of the Capacity IQ® application, while the Patient_Visit_Id is the unique identifier. This data source contains historical data and has a latency of ~45-90 minutes.
Transporter Employee Performance
Transporter Employee Performance is a Standard Lane data source that contains data related to hourly metrics of each transporter. Each row in this data source contains metrics for a given transporter in a given hour of the day. Details about a transporter will only be available if the transporter was logged in during those hours. The unique identifier for this data set is a combination of Instance_ID, Enterprise_User_ID, Start_Date and Hour. This data source contains historical data and has a latency of 3 hours.
TransportTracking
Transport Tracking is a Standard Lane data source which contains a historical view of metrics related to physical patient moves done by Transporters within the Capacity IQ® application. Each row in this data source represents an individual transport job. This includes both patient and item jobs. The unique identifier for this dataset is a combination of Instance_ID and TT_Job_Id. This data source contains historical data and has a latency of ~45-90 minutes.
Volume Analysis
Volume Analysis is a Standard Lane data source that contains hourly volume data for each unit. The number of admits, discharges, transfers and more are available in this data source. All measures and dimensions relate to the unit. Each row in this data set represents a snapshot of a given hour for an individual unit. The unique identifier for this dataset is a combination of Instance_ID and Statistics_Cumulative_Id. This data source contains historical data and has a latency of ~45-90 minutes.
For more detailed information, go to Data Points – Historical.
Data Sources Overview - Fast Lane
Census Current
Census Current is a Fast lane data source showing unit level Census data from Capacity IQ®. Each row in this data source contains the current census metrics. This data source has a latency of ~5-15 minutes.
Current Bed Status
Current Bed Status is a Fast lane data source showing the latest location status in Capacity IQ®. Each row in this data source is at the bed level and shows the latest bed status for that location. This data source has a latency of ~5-15 minutes.
Current BedTracking Employee Status
The Current BedTracking Employee Status data source is a fast lane data source. Each row of this data source shows the latest status of each EVS employee. There is one row per employee. This data source has a latency of 5-15 minutes. This data source should only be used with Capacity IQ® or Capacity Management Suite® version 2021.1 or earlier.
Current Care Progression Indicators Detail
The Current Care Progression Indicators Detail is a Fast Lane data source that contains Care Type information for each Ordered Care Progression Group. This data source also includes Patient Visit information associated with the Care Progression Group. Each row is uniquely identified by Milestone_Entity_Detail_Id. The fast lane data source retains records where the patient's discharge date/time is within the last 72 hours, as well as patients who are in the Current Patient Visit fast lane data source. This data source has a latency of ~5-15 minutes.
Current Discharge Milestones
The Current Discharge Milestones data source includes discharge milestones for active patients as well as milestones that have been updated within the last 72 hours. Each row in this data set represents an individual discharge milestone for a patient record. This data source has a latency of 5-15 minutes.
Current IQ Case
Current IQ Case is a Fast Lane data source that contains transfer case data. Each row in this data set represents the latest record for a transfer center case. The data source retains all cases where the Case Complete Timestamp is within the last 72 hours or is null. This data source has a latency of ~5-15 minutes.
Current Patient Attributes
Current Patient Attributes is a Fast lane data source showing the start and end times for the Patient Attributes associated with the patients from Capacity IQ®. Each row in this data source contains the start and end times for a given attribute for a given patient. This data source retains records for all patients in the Current Patient Visit data source. The Current Patient Visit data source includes all current patient visit records (in house, preadmit, pending transfer, pending discharge, confirmed discharge) and those in a discharged and cancelled status for the last 72 hours based on the latest last mod date. This data source has a latency of ~5-15 minutes.
Current Patient Placement
Current Patient Placement is a Fast lane data source showing placements data from Capacity IQ®. Each row in this data source contains an individual placement record. This data source excludes requested (but not activated), completed and cancelled placements older than 72 hours based on bed request timestamp. This data source has a latency of ~5-15 minutes.
Current Patient Visit
Current Patient Visit is a fast lane data source that contains current patient visit records, the latest times they are admitted, marked as pending/confirmed discharge, and actually discharged. This data source also includes all the patient details dimensions. This data source has a latency of 5-15 minutes. The fast lane data source retains discharged and visit cancelled records for the last 72 hours based on the latest last mod date. The date limitation does not apply to records that are in confirmed discharge, In house, Pending Discharge, Pending Transfer, or PreAdmit Status.
Current Transfer Milestones
Current Transfer Milestones is a Fast Lane data source that includes transfer milestones that have an active record in the Current Patient Placement data source. The Current Patient Placement data source includes all current patient placement records. The Current Patient Placement data source excludes requested (but not activated), completed and cancelled placements older than 72 hours based on bed request timestamp. Each row in this data set represents an individual transfer milestone for a patient placement record. This data source has a latency of ~5-15 minutes.
Current TransportTracking
Current TransportTracking is a Fast lane data source. It contains active Capacity IQ® jobs, as well as those jobs last modified within the last 72 hours. This data source has a latency of 5-15 minutes. Each row of this data source represents an individual transport job. This includes both patient and item jobs.
Current TransportTracking Employee Status
The Current TransportTracking Employee Status data source is a fast lane data source. Each row of this data source shows the latest status of each Capacity IQ® employee. There is one row per employee. This data source has a latency of 5-15 minutes. This data source should only be used with Capacity IQ® or Capacity Management Suite® version 2021.1 or earlier.
EVS Streaming
The EVS Streaming data source is a Fast lane data source showing Environmental Services data from Capacity IQ®. This data source has a latency of ~5-15 minutes. Each row in this data source shows an individual cleaning job. It retains bed cleaning jobs that were updated within the last 72 hours.
For more detailed information, go to Data Points – Fast Lane.
Data Source Extracts
What are Data Source Extracts?
Data source extracts are a way to store data within the Tableau Server, instead of within Snowflake, to allow for significantly faster content load times.
This means that dashboards connected to an extract no longer need to ‘call’ Snowflake each time they load.
The data is already in a place that Tableau can quickly access.
With a live connection, any time you load a dashboard, Tableau must ‘call’ Snowflake and pull the data back into Tableau.
This is a time-consuming process and a frequent cause of slow load times.
What’s happening?
We are implementing data source extracts into Data IQ® for standard lane throughput data
This will allow interactive throughput reports using a Data Source Extract to open up to 15x faster.
How fast will my historical throughput dashboards load?
In testing, the majority of historical throughput dashboards open within 3-7 seconds.
This will vary depending on data volume and complexity.
This is significantly faster than previous live connection load times.
Using multiple data sources, blending, or poorly designed dashboards can result in longer load times.
When did this change occur?
We implemented Data Source Extracts to all customers in mid-January 2024.
Why did we implement Data Source Extracts?
Previously our interactive throughput reports that pull historical data would have load times between 30 seconds and 1 minute.
Using data source extracts the dashboards will load significantly faster.
Example:
Census Historical Trends typically takes 45 sec - 1 min to load.
Using an extract it loads within 5 seconds.
The Performance Trending Scorecard can take over 1 min to load.
Using an extract it loads within 7 seconds.
How often are Data Source Extracts updated?
The extracts will be updated at 2AM each night and include data from the last 2 years (730 days) through the entirety of the previous day.
Example – On November 17, 2023, you would have data from November 16, 2021 through November 16, 2023 (see the sketch below).
Extracts will not be updated more than once per day.
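As a rough sketch of that rolling window (a simple date calculation, not the actual refresh job):

from datetime import date, timedelta

def extract_window(refresh_date):
    # Rolling 730-day window ending with the entirety of the previous day.
    end = refresh_date - timedelta(days=1)
    start = end - timedelta(days=730)
    return start, end

print(extract_window(date(2023, 11, 17)))  # (datetime.date(2021, 11, 16), datetime.date(2023, 11, 16))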
What if my health system spans multiple time zones?
The refresh time will be managed on a customer-by-customer level depending on the number of time zones spanned.
This will ensure clients in the Pacific time zone will have their data refresh at the appropriate time and include the full prior day’s data.
Extracts will be based on the tenant time zone.
This is the same as the existing Snowflake setup.
Relative date calculation functionality will not change.
Why are we including 2 years of data in Data Source Extracts?
In testing, we found that the vast majority of use cases for historical data were within the last year with a much smaller amount being within the last two years. Use cases beyond two years were seen to be exceptionally rare. As we need to be cognizant of data storage and Data Source Extract refresh time we decided to limit these to two years.
What happened to my dashboards that were using the standard lane historical connections?
We replaced the existing standard lane data source connections with Data Source Extracts.
Any dashboard that was connected to a standard lane connection was automatically switched to extract data.
All fields within the previous live connection still exist in the new Data Source Extract connection.
This means that if you were using a standard lane data source to show data for the current day, you will need to switch to either the new ‘FULL_HISTORICAL’ data source or a fast lane connection.
What if I want to use the standard lane data that contains data for today?
In this case you will need to edit your dashboard and replace the Data Source Extract connection with the new standard lane connection that contains ‘FULL_HISTORICAL’ in the data source name.
This standard lane data source containing the name ‘FULL_HISTORICAL’ will function how our standard lane connections have in the past.
If you only need data for the current day, it is recommended you use our Fast Lane data sources.
What is happening with fast lane data sources?
We are not changing anything with the existing fast lane connections.
What is happening with Transfer IQ® and On-Premise TransferCenter data?
These data sources will not change and will continue to operate as they currently do.
What will be the 3 types of data source connections?
Fast Lane
These will continue to operate as they always have.
They will update about every 5 min.
These are the data source connections used by Command Center dashboards.
Historical Extracts
These will replace the existing connections of Standard Lane data sources.
They will be updated every night.
They contain historical data from the last 2 years (rolling 730 days) through the end of the previous day.
Standard Lane Full Historical
These were added to the Data IQ® server as new data source connections.
These function the same as our current form of standard lane connections.
They have the words ‘FULL_HISTORICAL’ in the data source name.
They contain the entirety of your historical data from the beginning of implementation with TeleTracking to the current day.
What are the use cases for each type of data source?
Fast Lane - “What’s happening right now.”
This data is to be used for current operations status.
Contains 48-72 hours of data depending on data source.
Data is current to within about 5 minutes.
Historical Extracts - “How have we been trending over the last 2 years?”
Use for historical analysis of data within the last 2 years.
Contains data from 2 years ago (rolling 730 days) through the end of the previous day.
Standard Lane Full Historical - “How much have we improved since 2016?”
Use for historical analysis when you need to see all data since implementation.
Contains data from implementation to the current day.
How can I tell if I’m connected to an Extract?
Live connections have a single cylinder symbol to show their connection type.
Extract connections have a double cylinder symbol to show their connection type.
Data Source Type Diagram
This diagram shows what data is retained and the refresh rate of each type of data source.
Specific Data Source Layout
This table shows each ‘lane’ of data available for our data sources.
Export Data
You can export data from your dashboards using tabular reports. Tabular reports present the data points and metrics you see on an interactive dashboard in a table format.
Currently, tabular reports are available for the following dashboards:
BTJobs
Transport Tracking
Patient Placement
Patient Visit
Census Snapshot
From a report, you can edit your view of the data by including or excluding specific objects. To do so, click on a data point and hover over it:
With the dialog that appears, you can:
Exclude the object from your report
Sort by ascending
Sort by descending
View data from just the selected object
You can also filter your report using the filter panel:
The availability of filters and the option to edit the view of your data varies between reports.
Download a Tabular Report
Navigate to Interactive Reports > Throughput > Capacity Management Suite® History Table.
Select the tabular report that corresponds to the data source you want to know more about.
EVS History is linked to BTJobs.
Set your filters to download the exact data you need.
Click Download from your toolbar, then Data from the Download dialog.
This opens the View Data page. This page separates your rows of data into a Summary tab and a Full data tab. The tab you're on when you download your dataset will dictate the data points you see in your CSV file, so check the preview in each tab to ensure that you're downloading what you need.
Select either the Summary or Full data tab from the View Data page and click Download all rows as a text file.
A CSV download will appear in your browser and from here, you'll be able to edit or format your data to suit your hospital's needs.
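Once the CSV is downloaded, it can be loaded into whatever tooling your team prefers. A minimal pandas sketch (the file name is illustrative):

import pandas as pd

# The file name will match whatever your browser saved; this one is only an example.
evs_history = pd.read_csv("EVS_History_Full_Data.csv")
print(evs_history.shape)   # (rows, columns) in the export
print(evs_history.head())  # preview the first few rows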
Ask Data
Ask Data is a feature that utilizes natural language processing to let users ask data questions; in response, Data IQ® creates a basic visualization to answer the question.
In this example, we'll be asking the question: "what is overall turn time by campus?".
Click Ask Data from the toolbar of your selected dashboard.
If applicable, after this step you may be asked to select a data source to base your answer on (this depends on whether your dashboard is linked to multiple data sources).
Enter your question in the search box.
A visualization will automatically be created to satisfy your request. You can also use the drop-down in the upper right-hand corner to change the type of visualization displayed.
Explain Data
This feature allows users to discover potential reasons for why certain data points are higher or lower than expected. For instance, if Average Response Time is higher than usual on a given day you could use the Explain Data feature to learn potential reasons for the increase in response time.
The information provided in the Explain Data window will offer potential explanations as to why certain data points are higher or lower than expected using AI-driven insights.
Date Formatting
The date format of datetime and date fields within Data IQ® is based on your browser's language setting.
At a high level, the US uses the MM-DD-YYYY format, while most other countries use DD-MM-YYYY.
In this example, MAX(Createddate) is used to display the date formatting.
If the browser language is set to 'English (United States)' the date format will appear similar to the image below.
To transform date records into the more widely used international format, the browser language should be set to a non-US option. By choosing 'English (United Kingdom)', the date format will appear similar to the image below, where the date is displayed as DD-MM-YYYY.
Update Browser Settings
To edit these settings in Chrome, follow the steps below:
Navigate to chrome://settings/languages
Find the Preferred Languages section at the top of the screen. Add your preferred language. For example English (United Kingdom) will adjust the date format. You can also move your preference to the top if it is not already there.
Auto Refresh
Refresh Timer
Every data source in Data IQ® contains the parameters necessary to add a refresh timer to a report. The timer will automatically update your data source every 60 seconds.
Click Edit on an existing report or create a new one.
From the Dashboard tab, click Web Page under Objects.
Enter the URL below inside the Edit URL dialog to publish the workbook to the current environment server.
Save and publish your workbook.
Once the workbook is published, the refresh timer will appear on your server and update your connected data source every 60 seconds.
Override Default Refresh Values
The refresh timer uses parameters that you can customize to modify the default auto-refresh settings. Use any of the values below to edit the timer's behavior without editing the server's JavaScript files.
To edit a parameter, select a value from the drop-down below and create a parameter in Data IQ®. Name the parameter after the value you want to change (e.g. autoRefresh_seconds) and save it to apply your preferred settings.
Refresh parameters:
autoRefresh_seconds
Data type: integer
Default value: 600
autoRefresh_radius
Data type: Float
Default value: 30
autoRefresh_direction
Data type: String
Default value: cw
autoRefresh_smooth
Data type: Boolean
Default value: True
autoRefresh_fontSize
Data type: Float
Default value: undefined, calculated from radius (30/1.2 = 25)
autoRefresh_fontWeight
Data type: Integer
Default value: 700
autoRefresh_fontColor
Data type: String
Default value: #ffffff
autoRefresh_fontFamily
Data type: String
Default value: Sans-serif
autoRefresh_label
Data type: String
Default value: second, seconds
autoRefresh_strokeWidth
Data type: Float
Default value: undefined, calculated from radius (30/4 = 7.5)
autoRefresh_strokeStyle
Data type: String
Default value: #477050
autoRefresh_fillStyle
Data type: String
Default value: #22AD71
Data IQ® Warehouse Connector Feature
What Is the Data IQ® Warehouse Connector Feature?
The Data IQ® Warehouse Connector is a feature of the Data IQ® product and cannot be purchased as a stand-alone solution. It serves as a gateway for customers to access historical information that is warehoused in a TeleTracking data repository for reporting and analytics. The benefit of this solution is to give customers the ability to extract patient logistics information using industry-standard interfaces (ODBC and JDBC) and integrate it with other data from various other sources (clinical, financial, etc.) that are beyond the solutions provided by TeleTracking.
Data IQ® Warehouse Connector allows historical access to all Data Points labeled as Standard Lane. Details on those Data Points are available here.
Customer Requirements
The Data IQ® Warehouse Connector feature of the Data IQ® product requires the customer to provide the technical staff to take advantage of the available capabilities. This section defines the responsibilities of TeleTracking and the responsibilities of the customer.
If your company signed a contract for Data IQ® prior to December 2021, you will need a contract addendum to access the Data IQ® Warehouse Connector feature. Please coordinate with your Customer Representative for additional information.
TeleTracking Responsibilities
Provide a form to collect information that will be used to set up the appropriate accounts and background to support the customer during the term of the agreement.
Create accounts in accordance with the information provided.
Maintain a historical repository of customer information against which customer queries can be run on an ad hoc or periodic basis.
Provide a mechanism to reset passwords for the designated customer accounts.
Provide client support in aiding the customer to connect to the designated repository.
Provide client support in aiding the customer to understand which data fields are appropriate for different use cases.
Monitor utilization of the system and alert the customer if utilization is exceeding the specifications defined in the contract.
Customer Responsibilities
Provide necessary information as requested on the Data IQ® Warehouse Connector Account Request Form. Please contact your TeleTracking Commercial Representative for access to the form.
Provide adequate technical resource(s) that are knowledgeable in the following areas:
Programming queries using ODBC/JDBC.
Setting up connections to a Snowflake database.
Managing the transformation of data to align with other data sources if migrating data into an internal enterprise data warehouse.
Provide data access and reporting tools for the data once it is extracted from TeleTracking systems.
Comply with utilization specifications as defined by TeleTracking in the contract.
Maintain and manage all mandatory PHI requirements for the data extracted using the Data Warehouse Connector.
Expected General Use
The Data IQ® Warehouse Connector is designed to allow customers to extract data directly from the Snowflake database using standard SQL over an ODBC/JDBC connection.
It is expected that when the customer first starts using this capability, they will run exploratory queries to determine what data is needed.
When the customer has determined the data they need, they will run a large extract of all of their historical data. The contractual constraints do not apply to this one-time extract.
Going forward, TeleTracking expects the customer to run a batch job that performs an incremental extract to get updated data (the assumption is that this will be a daily run). These incremental extracts must not exceed usage limits on a recurring basis.
Chronic overuse of the Data Warehouse Connector can result in the account(s) being permanently disabled.
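For illustration, the sketch below shows what a recurring incremental extract might look like in Python using the snowflake-connector-python package. The view name PLACEMENT_EVENTS and the LAST_UPDATED_DTM column are hypothetical placeholders, not actual Data IQ® objects; substitute the Standard Lane Data Points documented by TeleTracking and the account, warehouse, database, and schema values provided for your organization.
# A minimal sketch of a daily incremental extract, assuming the
# snowflake-connector-python package. PLACEMENT_EVENTS and
# LAST_UPDATED_DTM are hypothetical placeholders; substitute the
# actual Standard Lane objects documented by TeleTracking.
import datetime
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_name>",      # values provided by TeleTracking
    user="<username>",
    password="<password>",
    warehouse="BI_DWC_WH",
    database="BI_DWC_DB",
    schema="DWC",
)

# Pull only rows updated since the previous run (here, one day ago)
# so recurring extracts stay within the contracted usage limits.
since = datetime.datetime.now() - datetime.timedelta(days=1)
cur = conn.cursor()
try:
    cur.execute(
        "SELECT * FROM PLACEMENT_EVENTS WHERE LAST_UPDATED_DTM >= %s",
        (since,),
    )
    for row in cur:
        pass  # load the row into your enterprise data warehouse
finally:
    cur.close()
    conn.close()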
Next Steps
When you confirm that your organization has the resources required to use the Data Warehouse Connector, contact your TeleTracking Commercial Representative. They can make sure that the following items are addressed:
The appropriate Legal Contract Addendum must be signed.
The Data Warehouse Connector Account Request form is available to you in Knowledge Bridge.
How to Connect to Snowflake
When you have completed the prerequisites, use the following section to proceed with the Data IQ® Warehouse Connector setup.
Client Versions and Support Policy
The operating systems supported by Snowflake, the minimum and recommended client versions, and Snowflake support policies are found on their requirements page.
Connecting Directly Using Drivers and Connectors
Detailed instructions for installing, configuring, and using the Snowflake-provided drivers and connectors for JDBC, ODBC, Python, Spark, Go, and other clients are available on their drivers page.
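As an example of the Python connector mentioned above, the following minimal sketch opens a session and runs a sanity-check query. It assumes the snowflake-connector-python package is installed; the warehouse, database, and schema names mirror the sample values in the connection strings below, and your actual values are provided by TeleTracking.
# Minimal connection sketch using the Snowflake connector for Python
# (pip install snowflake-connector-python).
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_name>",
    user="<username>",
    password="<password>",
    warehouse="BI_DWC_WH",
    database="BI_DWC_DB",
    schema="DWC",
)

cur = conn.cursor()
try:
    # Simple check that the session is established.
    cur.execute("SELECT CURRENT_VERSION()")
    print(cur.fetchone()[0])
finally:
    cur.close()
    conn.close()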
JDBC Driver
Sample JDBC Connection String:
jdbc:snowflake://<account_name>.snowflakecomputing.com/
?user=<username>&password=<password>
&warehouse=<warehouse_name>&db=<database_name>
&schema=<schema_name>
# For example:
jdbc:snowflake://xy12345.us-east-1.snowflakecomputing.com
/?user=<username>&password=<password>
&warehouse=BI_DWC_WH&db=BI_DWC_DB&schema=DWC
Reference – Connect to Snowflake via JDBC
ODBC Driver
Installation and Configuration
Sample ODBC Connection String:
Driver={SnowflakeDSIIDriver};Server=<account_name>
.snowflakecomputing.com;Database=<database_name>;
uid=<username>;pwd=<password>;Schema=<schema_name>;
Warehouse=<warehouse_name>
# For example:
Driver={SnowflakeDSIIDriver};Server=xy12345.us-east-1.snowflakecomputing.com;
Database=BI_DWC_DB;uid=<username>;
pwd=<password>;Schema=DWC;Warehouse=BI_DWC_WH
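For reference, the same ODBC connection string can be used from Python through pyodbc, as in the sketch below. It assumes the Snowflake ODBC driver has been installed and registered as SnowflakeDSIIDriver; CURRENT_WAREHOUSE() and CURRENT_SCHEMA() are standard Snowflake functions used here only as a connection check.
# Minimal sketch reusing the ODBC connection string from Python via
# pyodbc (pip install pyodbc); assumes the Snowflake ODBC driver is
# installed and registered as SnowflakeDSIIDriver.
import pyodbc

conn_str = (
    "Driver={SnowflakeDSIIDriver};"
    "Server=<account_name>.snowflakecomputing.com;"
    "Database=BI_DWC_DB;"
    "uid=<username>;pwd=<password>;"
    "Schema=DWC;Warehouse=BI_DWC_WH"
)

conn = pyodbc.connect(conn_str)
cur = conn.cursor()
try:
    # Confirm the active warehouse and schema for the session.
    cur.execute("SELECT CURRENT_WAREHOUSE(), CURRENT_SCHEMA()")
    print(cur.fetchone())
finally:
    cur.close()
    conn.close()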
.Net Driver
Sample .Net Connection String:
account=<account_name>;user=<username>;
password=<password>;db=<database_name>;
schema=<schema_name>;warehouse=<warehouse_name>
# For example:
account=xy12345.us-east-1;user=<xxxxxx>;
password=<xxxxxx>;db=BI_DWC_DB;
schema=DWC;warehouse=BI_DWC_WH
Other Drivers and Connectors
Snowflake Web UI
Popular IDEs to Connect to Snowflake
Data Integration / ETL tools
Talend
Matillion
AWS Glue
Fivetran
Connect to Snowflake Using TablePlus
Install TablePlus from: TablePlus | Modern, Native Tool for Database Management.
Open TablePlus and follow the steps to Create a new connection.
Select Snowflake, and then click Create.
Configure the Snowflake connection using the connection details (account, username, password, warehouse, database, and schema) provided by TeleTracking.
Click Test to ensure the connection is successful.
View the data by doing one of the following:
Click the views on the left pane.
Open the SQL Editor and run a query (remember to use the 'BYOX' schema).