Defining Rapid Insight

We are often asked what makes Rapid Insight different, both as a company and in our products. To answer that question, we're reposting this interview with Rapid Insight CEO Mike Laracy, who talks about what drove him to start the company and how we provide an easier-to-use solution than many traditional predictive modeling software tools and services.

Rapid Insight has been around since 2002. Can you tell us a bit of the story of how the company came to be?
I had been doing a lot of work in the analytic space using software tools like SAS and SPSS. I found predictive modeling to be such a clunky, painful process, and I knew there had to be a more efficient way to analyze data and build predictive models. Working as an analytic consultant, I had the opportunity to see how lots of companies were interacting with their data. Even the large Fortune 500 companies were struggling to analyze their data and build models. The problem was that the only tools available had been developed decades earlier for programmers and academic researchers.

I was living in Boulder, Colorado, when I developed the concept for Rapid Insight. I spent a lot of time thinking through the predictive modeling process and figuring out how it could be automated and streamlined. I sat on the concept for a couple of years before actually starting the company.

In 2002 I moved here to North Conway and rented some office space to start developing the concept into an actual software product. For the first six months it was just me. I spent that time writing the algorithms and developing a working prototype. I wasn't a programmer, and I knew that to turn the software into a commercial application, I'd need more help. I hired a software developer who is still with the company today as our lead engineer. A year later we hired another developer. In 2006 we hired our first salesperson, launched Rapid Insight Analytics, and we've been growing ever since.

Do your products focus exclusively on predictive analytics?
No, our products also focus on ad hoc analysis and reporting. In 2008, we launched our second product, Veera. Whereas Rapid Insight Analytics automates and streamlines the process of predictive modeling and analysis, Veera focuses on the data. Data is typically scattered across databases, text files, and spreadsheets, with no easy way to organize it and piece it together for modeling and analysis. Veera solves that problem. It's a data-agnostic technology that can connect to any database and any file format, making it easy for people to integrate, cleanse, and organize their data for modeling, reporting, or simply ad hoc analysis.
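To give a feel for the prep work a tool like Veera takes off your plate, here is a minimal pandas sketch of pulling three disparate sources into one analytic file. The file, table, and column names are all hypothetical, and Veera itself does this without any code; the sketch just spells out the steps.

```python
import sqlite3

import pandas as pd

# Three disparate sources (all names here are hypothetical).
applicants = pd.read_csv("applicants.csv")        # text file
test_scores = pd.read_excel("test_scores.xlsx")   # spreadsheet
with sqlite3.connect("sis.db") as conn:           # database
    enrollment = pd.read_sql("SELECT student_id, enrolled FROM enrollment", conn)

# Cleanse: normalize the join key and drop duplicate records.
for frame in (applicants, test_scores, enrollment):
    frame["student_id"] = frame["student_id"].astype(str).str.strip()
    frame.drop_duplicates(subset="student_id", inplace=True)

# Integrate into a single analytic file, one row per student.
analytic_file = (
    applicants
    .merge(test_scores, on="student_id", how="left")
    .merge(enrollment, on="student_id", how="left")
)
analytic_file.to_csv("analytic_file.csv", index=False)
```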

We initially developed this technology as a tool to organize data for predictive modeling.  We’re now seeing enormous demand for the tool as a standalone technology as well.  Colleges and universities use it for reporting and ad hoc analysis.  Companies like Choice Hotels and Amgen use it for processing analytic datasets with data coming from disparate sources.  Healthcare organizations are using it for reporting and performing ad hoc analyses on their databases.  Defense contractors are using it for cyber security.

What makes your company different from others working in the higher ed space?
In higher ed there are consulting companies that provide predictive modeling services.  You send them your data, and they build a model and send you back the model and a report.  But the institution still has to do the prep work to create the analytic file, which is 90% of the effort.  This process is both expensive and time-consuming, and the knowledge gained from the analysis isn’t always transferred back.   By bringing predictive modeling in-house, changes can be made on the fly without having to send data anywhere and models can be changed and updated very quickly, which is important because modeling is such an iterative process.

We provide schools with a means of doing this analysis and building their own models.   One advantage is that the knowledge is always captured internally.  But the biggest advantage is the ability for institutions to be able to ask questions of their data and answer them on the fly.

As far as other software products being used in higher ed, we're very different from tools like SAS or SPSS in that users don't need to be programmers or statisticians to build models with our tools. I think if you asked our customers, you'd find that one of our biggest differentiators from these types of products is our customer support. Our analysts are available to help our clients with any questions as they build models, analyze data, or create reports. Whether the questions pertain to using our technology or to interpreting the results, we are always available to help. We want to ensure that our customers build a sustainable analytic capability of their own.

Where does predictive modeling fit into the analytic ecosystem in higher education?

Within the analytic ecosystem in higher ed, there is a spectrum of ways in which data is analyzed. At one end, you have historical reporting, which our clients do a lot of and which is vital to every institution. Somewhere in the middle is data exploration and analysis, where you're slicing and dicing data to understand it better or to make more informed decisions based on what happened in the past. At the other end of the spectrum is predictive modeling. Modeling means looking at all of the variables in a dataset to make informed predictions about what will happen in the future. What is each applicant's probability of enrolling, or what is each student's attrition likelihood? What will the incoming class look like based on the current admit pool? These are the types of questions being answered in higher ed with predictive analytics. The resulting probabilities can also be used in the aggregate. For example, enrollment models allow you to predict overall enrollment, enrollment by gender, by program, or by any other factor. The models are also used to project financial outlay based on the financial aid promised to admitted applicants and their individual enrollment probabilities.
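To make the aggregation point concrete, here is a small sketch of how individual enrollment probabilities roll up into class-size and financial projections. The table and numbers are invented for illustration, not taken from the interview.

```python
import pandas as pd

# Hypothetical model output: one row per admitted applicant.
admits = pd.DataFrame({
    "applicant_id": [1, 2, 3, 4],
    "program":      ["Nursing", "Business", "Nursing", "Biology"],
    "enroll_prob":  [0.82, 0.15, 0.47, 0.60],    # model-scored probabilities
    "aid_offered":  [12000, 8000, 15000, 5000],  # promised financial aid ($)
})

# Expected class size is the sum of individual enrollment probabilities.
expected_enrollment = admits["enroll_prob"].sum()

# The same probabilities roll up by any factor, e.g. by program.
by_program = admits.groupby("program")["enroll_prob"].sum()

# Projected financial outlay: aid is only paid out if the student enrolls.
expected_aid = (admits["enroll_prob"] * admits["aid_offered"]).sum()

print(f"Expected enrollment: {expected_enrollment:.1f}")
print(by_program)
print(f"Projected aid outlay: ${expected_aid:,.0f}")
```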

Higher education has come a long way in the last five to ten years in its use of predictive analytics. The entire student life cycle is now being modeled, starting with prospect and inquiry modeling all the way through to alumni donor modeling. It used to be that institutions doing this kind of modeling relied on outside consulting companies. Today most are doing their modeling in-house. Colleges and universities view their data as a strategic asset, and they are extracting value from it with the same tools and methodologies as the Fortune 500 companies.

What kinds of resources are needed, and what is the first step for an institution that wants to become more data-driven in its decision making?

It's important to have somebody who knows the data. As long as a user has an understanding of their data, our software makes it easy to analyze that data and build predictive models very quickly. And our support team is available to answer any analytic questions.

Gaining access to the data is the first step. We see a lot of institutions with reporting tools that don't allow them to ask new questions of the data. They might have a set of 50 reports that they're able to run over and over, but any time someone has a new question, there's no way to answer it without access to the raw data.

It really helps if the institution is committed to a culture of data-driven decision making. Then all the various stakeholders are more focused on ensuring data access for those doing the predictive modeling.

What do you say to those who are on “the quest for perfect data”?  Is it okay to implement predictive analytics before you have that data warehouse or those perfectly cleansed datasets?

No institution is ever going to have perfect data, so you work with what you have. We suggest seeing what you have, finding any obvious problems in the data, and then fixing those problems the best you can. We've designed our solutions so that a data warehouse is not required, but even with a clean data warehouse, the data is never going to be perfect. As long as you have an understanding of the data, you can move forward.
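In practice, that first look can be as simple as profiling an extract for obvious problems. A minimal sketch, assuming a hypothetical student file with a GPA column:

```python
import pandas as pd

students = pd.read_csv("students.csv")  # hypothetical extract

# See what you have: column types, missing values, basic distributions.
print(students.dtypes)
print(students.isna().sum())
print(students.describe())

# Fix the obvious problems the best you can, e.g. blank out impossible
# GPA values rather than letting them distort a model.
students["gpa"] = students["gpa"].where(students["gpa"].between(0.0, 4.0))
```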

In your experience, which models in higher education produce the highest ROI?
We have a customer, Paul Smith's College, that has quantified its retention modeling efforts. Using the model results, they put programs into place to help those students predicted to be at high risk of attrition. They credit the modeling with helping them identify which students to focus on, saving them $3 million in net tuition revenue so far.

We have other clients using predictive modeling on the prospect side. Instead of mailing to 200,000 high school seniors, they're mailing to 50,000, realizing significant savings on their recruiting efforts by not mailing and not calling those students who have pretty much zero probability of applying or enrolling.
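The mechanics of that cut are simple once every prospect has a model score. A sketch, with the file and column names invented for illustration:

```python
import pandas as pd

prospects = pd.read_csv("prospects_scored.csv")  # hypothetical scored file

# Keep the 50,000 prospects with the highest predicted probability of
# applying; everyone else is dropped from the mail and call campaigns.
mail_list = prospects.nlargest(50_000, "apply_prob")
mail_list.to_csv("mail_list.csv", index=False)
```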

Although not as easily quantifiable, enrollment modeling has a pretty big ROI, not only in determining which applicants are likely to enroll, but in predicting class size. If an institution overshoots and enrolls too many applicants, it will have dorm, classroom, and other resource issues. If it enrolls too few, it will have revenue issues. So predicting class size and determining who, and how many, applicants to admit is extremely important.

What are some common mistakes you see your higher ed customers make when approaching predictive modeling?

One mistake that I often see is information being thrown out as not useful to the models. Zip code is a good example. Zip code looks like a five-digit numeric variable, but you wouldn't want to use it as a numeric variable in a model. In some cases it can be used categorically to help identify applicants' origins, but its most useful purpose is for calculating a distance-from-campus variable. This is a variable that we see showing up as a predictor in many prospect/inquiry models, enrollment models, alumni models, and even retention models. Another example of a variable that is often overlooked is application date. Application date often contains a ton of useful information if looked at correctly. It can be used to calculate the number of days between when the application was sent and the application deadline. This piece of information can tell you a lot about an applicant's intentions. A student who gets their application in the day before the deadline probably has very different intentions than a student who applies nine months before the deadline. This variable ends up participating in many models.
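Both derived variables take only a few lines to compute once you have the raw fields. A sketch, assuming a hypothetical zip-centroid lookup table, campus coordinates, and application deadline:

```python
from math import asin, cos, radians, sin, sqrt

import pandas as pd

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * asin(sqrt(a))  # Earth radius of roughly 3,959 miles

CAMPUS_LAT, CAMPUS_LON = 44.05, -71.13  # hypothetical campus location

# Zip codes stay strings so leading zeros survive (they are not numbers).
apps = pd.read_csv("applications.csv", dtype={"zip": str})        # hypothetical
centroids = pd.read_csv("zip_centroids.csv", dtype={"zip": str})  # zip -> lat/lon

# Zip code becomes a distance-from-campus variable, not a raw number.
apps = apps.merge(centroids, on="zip", how="left")
apps["miles_from_campus"] = apps.apply(
    lambda r: haversine_miles(r["lat"], r["lon"], CAMPUS_LAT, CAMPUS_LON), axis=1
)

# Application date becomes days between submission and the deadline.
deadline = pd.Timestamp("2024-01-15")  # hypothetical deadline
apps["days_before_deadline"] = (deadline - pd.to_datetime(apps["app_date"])).dt.days
```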

To get our customers up to speed on best practices in predictive modeling, we've created resources like lists of recommended variables for specific models and guides on how to create useful new variables from existing data.

Experience Rapid Insight

No risk, all reward! Download a free, fully functional 14-day trial of Veera Workstation and Rapid Insight Analytics today, and a member of our analyst team will help you get the most out of your trial. 


