United Community Banks Accelerates Innovation With Smarter Approach to Data

To invest in its future success, this popular bank fully rebuilt its data stack with Coalesce at the core

Company: United Community Banks, Inc.
HQ: Greenville, SC
Industry: Banking
Employees: 3,000
Top Results:
75% reduction in time to model complex JSON files with help of native Coalesce JSON parser (from 4 weeks to 1), freeing up developer time
4x faster Parquet file loading using Snowflake’s vectorized scanner, a new feature immediately supported by Coalesce
2x faster node creation with Coalesce templates, streamlining repetitive development tasks

“Coalesce’s simple interface lowers the barrier of entry to get started, and that ease of use is a great accelerator for us.”

Andrew Crisp
Director of Enterprise Data Services, United Community Banks

Originally founded in the mountains of North Georgia to provide banking services to local farmers, today United Community Banks, Inc. (UCBI) is the largest bank headquartered in the state of South Carolina, with assets of just under $30 billion. This full-service bank, which celebrates its 75th anniversary this year, has a footprint across most of the Southeastern United States, with more than 200 branches in North and South Carolina, Georgia, Tennessee, Alabama, and Florida. Despite its enormous growth and its modern digital offerings, the bank has not forgotten its roots and remains dedicated to building strong personal relationships with its customers; in fact, United Community has been ranked by J.D. Power as #1 in customer satisfaction for consumer banking in the Southeast 11 out of the last 16 years.

Dealing with disconnected data

Challenges

Data was siloed and fragmented across the organization
Limitations of legacy technology made it difficult to grow and scale
Data users throughout different departments lacked the big picture of what data was available to them

Andrew Crisp is the Director of Enterprise Data Services for United Community. In addition to Data Architect Kelly White, his data team includes several data engineers as well as BI and data analysts. The team processes all data flowing into and out of the bank, including digital banking, nightly core banking, loan applications, marketing, and even social media data.

When Crisp and White first joined United Community about three years ago, the company was solely using on-premises versions of SQL Server, SSIS, and Alteryx. “We’ve spent the last two years trying to modernize our offerings as a team to serve our community as well as internal customers,” Crisp says. “We’ve been leaning more into the cloud, focusing on what we can build and provide that has business value.”

According to Crisp, one big challenge the team faced initially was that the bank’s data was siloed and fragmented across the organization: “It was difficult to tell a cohesive data story because we had six or seven different SQL Servers, with a host of databases on each across a bunch of different schemas.” This was especially problematic given that there was a distributed network of data professionals across the organization who were using data, but were not part of Crisp’s team. “These users were comfortable in the few things they knew how to do, but they didn’t necessarily understand the bigger picture of what data existed at the bank,” he explains. “So we needed to figure out how to share and democratize that data so they could do their work more effectively.”

In addition, the team’s legacy technology made it difficult to grow and scale. They had ambitions to grow, but as Crisp explains, some of the solutions they had inherited from years past were reaching end of life. “Along with our Chief Data Officer, Kelly and I started strategizing how to streamline our initiatives and start thinking of the bigger picture,” he says. “How could we still meet the needs of our daily operations and ‘keep the lights on,’ while at the same time adjusting and rebuilding our data architecture so we could move toward a scalable future?”

Banking on a better approach

Solution

Adopted Snowflake as the organization’s single source of truth for all data
Brought on Coalesce as data transformation solution
Prioritized a clean, methodical migration from legacy on-prem solutions in order to avoid tech debt

Crisp and White decided that, first and foremost, they needed to get their data out of their on-prem data warehouse and into the cloud. For Crisp, Snowflake seemed like the best choice: “I came into the bank talking about Snowflake. I had used it at a previous company, and so was very familiar with its capabilities and the great impact it had had there.” He chose Coalesce as the data transformation component of their new data stack at the same time. “We looked at a few solutions in this space and decided that Coalesce was the right choice for what we were looking to do,” says Crisp. “Coalesce’s simple interface really lowered the barrier of entry to get started, and that ease of use was a great accelerator for us.”

Crisp explains that they initially moved slowly and methodically in order to set everything up correctly from the start. “We wanted to make sure we did it the right way,” he says. “We didn’t want to create tech debt or have to rework things in the future. So we tried to time everything carefully—we turned on Snowflake in October of 2024 and worked on some of the onboarding. Once we had the environment secured, we started using Coalesce to transform data in Snowflake.”

Over the past six months, Crisp and White have been ramping up their new data stack, and starting to get other team members more involved. “With Snowflake and Coalesce in place, we’re finally overcoming some of those barriers we’ve been facing for a long time now. We’re starting to see some early successes and a newfound agility that these cloud-based tools can provide.”

Cashing in on innovation

Results

75% reduction in time to model complex JSON files with help of native Coalesce JSON parser
4x faster to load Parquet files using Snowflake and Coalesce
Transforming and cleaning data upstream minimizes issues for data consumers

One of the team’s early success stories happened shortly after they first onboarded Coalesce. “We’d gotten some JSON files from a vendor, and one of our developers spent about three or four weeks trying to parse them using our legacy tooling,” Crisp recalls. “After a lot of back-and-forth with the vendor, and looking into whether there was a specific add-on we could purchase for SQL Server, this dev decided to try to do it in Snowflake. He loaded up the file, and with Coalesce’s JSON parsing functionality, it took him only a week to do the initial breakout of the nodes.” Crisp notes that this has freed his employee up to do other important things, whether that’s delivering on other projects or improving different processes. “What I don’t want him doing is spending a month mapping, breaking apart, and parsing something that Coalesce can do in a fraction of the time,” he says.
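For readers curious what that breakout looks like under the hood, below is a minimal sketch of the kind of JSON flattening Snowflake supports natively and that Coalesce’s JSON parsing nodes generate and manage. It uses the snowflake-connector-python library directly rather than Coalesce, and every credential, table, and field name is a hypothetical placeholder.

```python
# Minimal sketch: flattening nested vendor JSON that has been loaded into a
# Snowflake VARIANT column. All identifiers and credentials are placeholders;
# Coalesce generates and manages SQL like this through its JSON parsing nodes.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder account identifier
    user="my_user",
    password="my_password",
    warehouse="TRANSFORM_WH",
    database="RAW",
    schema="VENDOR",
)

flatten_sql = """
CREATE OR REPLACE TABLE VENDOR_ACCOUNTS AS
SELECT
    src.doc:accountId::STRING        AS account_id,
    src.doc:customer.name::STRING    AS customer_name,
    p.value:type::STRING             AS product_type,
    p.value:balance::NUMBER(18, 2)   AS balance
FROM RAW_VENDOR_JSON src,            -- table with a single VARIANT column named doc
     LATERAL FLATTEN(input => src.doc:products) p
"""

with conn.cursor() as cur:
    cur.execute(flatten_sql)
conn.close()
```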

Kelly White acknowledges that while he is pretty new to Coalesce himself, he has already become a big fan of the platform. “I love the ease of the SaaS setup,” he says. “Our team can just basically log in and start working. Some products I’ve worked with at other places didn’t work that way—you needed to waste a lot of time fixing the plumbing, and it was like having a new car that was always in the shop and that you couldn’t use.”

White also appreciates how seamlessly Coalesce integrates with Snowflake: “I’ve noticed that as Snowflake features roll out, they seem to be available right away in Coalesce. For example, I’ve been working with Parquet file loading, and we heard from our Snowflake rep that they now have a vectorized scanner for Parquet file loading. We reached out to the Coalesce team and they created a node to do this immediately. This node allows us to load Parquet files four times faster using Snowflake, something we’re excited about because some of our data loads are very large.”
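As a rough illustration of the feature White describes: Snowflake’s Parquet file format accepts a USE_VECTORIZED_SCANNER option on COPY INTO, and the Coalesce node generates the equivalent statement for the team. The sketch below shows what such a load looks like in plain SQL, with hypothetical stage, table, and credential names.

```python
# Sketch of a Parquet load that enables Snowflake's vectorized scanner.
# Stage, table, and credentials are placeholders; in practice the Coalesce
# node produces the equivalent COPY INTO statement.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",  # placeholders
    warehouse="LOAD_WH", database="RAW", schema="LANDING",
)

copy_sql = """
COPY INTO RAW.LANDING.CORE_TRANSACTIONS
FROM @RAW.LANDING.ADLS_STAGE/core/transactions/
FILE_FORMAT = (TYPE = PARQUET USE_VECTORIZED_SCANNER = TRUE)
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
"""

with conn.cursor() as cur:
    cur.execute(copy_sql)
conn.close()
```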

The Data Services team hosts Snowflake on Azure, where they are using Azure Data Lake Storage. Most of their ingestion lands data directly in Azure Data Lake, and the team then uses Coalesce to bring that data into Snowflake. “Initially, Kelly and I were unsure how we were going to get the data from the data lake into Snowflake,” Crisp recalls. “We started looking in Coalesce Marketplace, where we found the CopyInto Node (which automates the process of copying files from object storage into Snowflake), installed it in a few clicks, and then were off to the races. Now we’re starting to tinker with the idea of creating templatized nodes that meet the specific needs of our design methodologies. It’s really cool to be able to create a new Node Type off an existing one and make adjustments to it, not to mention having access to the vast number of Nodes and Packages available from Coalesce Marketplace.”
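For context on what that Data Lake-to-Snowflake path typically involves, here is a sketch of the standard Snowflake plumbing: a storage integration and external stage over an Azure container, plus the kind of COPY INTO statement the Marketplace node automates. The integration name, tenant ID, container URL, and table names are all placeholders, not United Community’s actual configuration.

```python
# Sketch of wiring Azure Data Lake Storage to Snowflake via a storage
# integration and external stage, then copying files in. Every identifier,
# URL, and the tenant ID below is a placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",  # placeholders
    warehouse="LOAD_WH", database="RAW", schema="LANDING",
)

statements = [
    """
    CREATE STORAGE INTEGRATION IF NOT EXISTS ADLS_INT
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'AZURE'
      ENABLED = TRUE
      AZURE_TENANT_ID = '<tenant-id>'
      STORAGE_ALLOWED_LOCATIONS = ('azure://myaccount.blob.core.windows.net/landing/')
    """,
    """
    CREATE STAGE IF NOT EXISTS RAW.LANDING.ADLS_STAGE
      STORAGE_INTEGRATION = ADLS_INT
      URL = 'azure://myaccount.blob.core.windows.net/landing/'
    """,
    # Target table has a single VARIANT column, so raw JSON files load as-is.
    """
    COPY INTO RAW.LANDING.DIGITAL_BANKING_EVENTS
    FROM @RAW.LANDING.ADLS_STAGE/digital_banking/
    FILE_FORMAT = (TYPE = JSON)
    """,
]

with conn.cursor() as cur:
    for sql in statements:
        cur.execute(sql)
conn.close()
```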

Crisp says that his developers are excited to start working with Coalesce. “One of the cool things about Coalesce is that it provides a great, repeatable framework. Kelly has done a lot of work making sure that we have a step-by-step guide and a framework on how we plan to utilize the tool in our environment. Coalesce makes that extremely easy to repeat over and over again. Now that we hammered out some of those initial details in our first pipeline with one developer, I’m excited to see how quickly we can get our other developers up to speed.”

Crisp says that part of what motivates the team is not just that Coalesce is so easy to use, but also that it is so enjoyable to build with. “When I show Coalesce to someone for the first time, they don’t seem overwhelmed or intimidated by it; instead, they’re eager to try it out, to poke around and create a node,” he says. “I had one developer who was starting to play around with Coalesce call me on a Saturday. He was bored, I was bored, and so we started working together on a weekend because Coalesce is so fun to use. He got really excited about it. We were able to reference the initial pipeline we’d already built and replicate a lot of it, taking advantage of templates and Jinja code to manipulate the objects in our pipeline in a way that would not have been so easy before.”
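As a general illustration of the templating Crisp is referring to, the snippet below uses the jinja2 library to stamp out repeatable SQL from one parameterized template. This is not Coalesce’s actual node template format, and every schema, table, and column name is invented; it simply shows why Jinja makes it easy to manipulate many similar pipeline objects at once.

```python
# General illustration of Jinja templating for repeatable SQL generation.
# This is NOT Coalesce's node template format; all names below are invented.
from jinja2 import Template

stage_view_template = Template("""
CREATE OR REPLACE VIEW {{ schema }}.STG_{{ name }} AS
SELECT
    {%- for col in columns %}
    {{ col }}{{ "," if not loop.last }}
    {%- endfor %}
FROM {{ schema }}.RAW_{{ name }}
WHERE _loaded_at >= DATEADD(day, -{{ lookback_days }}, CURRENT_TIMESTAMP())
""")

# Rendering the same template for two sources keeps the pattern identical
# while only the parameters change.
for name, columns in [
    ("LOAN_APPLICATIONS", ["application_id", "customer_id", "amount"]),
    ("DIGITAL_BANKING_EVENTS", ["event_id", "customer_id", "event_type"]),
]:
    print(stage_view_template.render(
        schema="BANKING", name=name, columns=columns, lookback_days=7,
    ))
```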

White believes that by using Coalesce, the team will continue to be able to work faster and faster. “We’re pulling in a wide spectrum of data—XML, JSON, Parquet—and each one has its own build pattern,” he explains. “Our data sources vary from dirty to clean data, so the transformation rules we want to apply vary greatly across different data sources. We’re documenting those rules and patterns, so when new people join, they’ll be able to say, ‘Oh, I’ve got data in JSON storage. Here’s the wiki, here’s what I need to pay attention to. Here’s how I make bulk changes in Coalesce quickly.’ I can see us just getting much faster as we get more comfortable with the platform, but it has also given us the chance to do it right the first time with a pattern.”

White predicts that one of the best outcomes of all the changes the team has put in place is something downstream data consumers may not even notice: “Because we’re spending more time upfront transforming the data from different sources and cleaning it up, they’re going to see fewer data anomalies and problems. We can optimize the data once and then it’s finished, so those users don’t have to deal with data issues.”

According to Crisp, the work his team is doing is starting to create a positive buzz across the entire bank, as more and more groups come to them asking for help with data projects. All this means that United Community is on the road to becoming a truly data-driven organization. “I guess Kelly and I did a really good job selling the vision because the business is excited,” he says. “They’re bought in and energized around the possibilities of what all these solutions can help us provide. And I’m excited because I think it’s going to give us the next real evolution of what our self-service data community is able to accomplish—that’s a big win.”

Start Building Data Projects 10x Faster

Experience the power of Coalesce with a free 14-day trial.