What we can learn from famous data quality disasters in pop culture


(Studiostock/Shutterstock)

Bad data can lead to disasters that cost hundreds of millions of dollars or, believe it or not, even the loss of a spacecraft.

Without processes that protect the integrity of your data every step of the way, your organization could experience catastrophic errors that erode trust and cost a fortune. To remind you that high-quality data is an end-to-end priority for all types of industries, let’s take a look at some of the biggest data quality incidents in recent pop culture history.

NASA: Lost Mars Orbiter Worth $125 Million Due To Data Units Error

In 1999, NASA lost a $125 million Mars orbiter because Lockheed Martin’s engineering team used imperial units while NASA used metric units.

Because the units did not match, data handed off from the Lockheed Martin team in Denver to the NASA flight team in Pasadena, California was misinterpreted. Working from the bad numbers, NASA’s flight team steered the orbiter far too close to the Red Planet, and the hundred-million-dollar spacecraft was destroyed.

Artist’s rendering of the lost Mars Climate Orbiter (Image courtesy of NASA/JPL/Corby Waste)

IT Chronicles reports: “The problem was that software provided by Lockheed Martin was calculating the force the thrusters had to exert in pounds of force, but a second piece of software, provided by NASA, took the data assuming it was in the metric unit, newtons. This led to the craft plunging 105 miles closer to the planet than expected, causing it to be completely incinerated, setting NASA back years in its quest to learn more about Mars, and burning $327.6 million of mission money in space.”
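At its core, the failure was a type error at the boundary between two systems: a raw number crossed the interface with no unit attached. Here is a minimal illustrative sketch (the class and values are hypothetical, not NASA’s actual flight software) of how tagging every force value with its unit makes the mismatch explicit and the conversion automatic:

```python
# A minimal sketch (hypothetical names) of how an explicit unit type
# could have caught the pounds-force vs. newtons mismatch.

LBF_TO_NEWTONS = 4.44822  # 1 pound-force expressed in newtons

class Force:
    """A force value tagged with its unit, so mismatches fail loudly."""
    def __init__(self, value: float, unit: str):
        assert unit in ("lbf", "N"), f"unknown unit: {unit}"
        self.value, self.unit = value, unit

    def to_newtons(self) -> float:
        return self.value * LBF_TO_NEWTONS if self.unit == "lbf" else self.value

# Ground software reports thruster force in pounds-force...
ground_output = Force(10.0, "lbf")

# ...and the trajectory software expects newtons. Reading the raw number
# (10.0) as newtons understates the force by a factor of ~4.45:
print(ground_output.value)          # 10.0   -- wrong if treated as newtons
print(ground_output.to_newtons())   # ~44.48 -- correct, conversion applied
```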

But NASA isn’t alone in battling bad data. According to IT Chronicles, 60% of companies have untrustworthy data health. Even if your stakes are lower than flying a spacecraft to Mars, you must take steps to maintain the integrity of your data.

Tom Gavin, the Jet Propulsion Laboratory administrator to whom all the project leaders reported, said of the mishap: “[It was] an end-to-end process problem. … Something went wrong in our system processes, in the checks and balances we have that should have detected this and fixed it.”

So what should be done to ensure the health of your data? First, managers at every level of your organization need to be invested in a process that ensures high-quality data. Then, even if you’re not heading to Mars, be sure to perform QA testing on your data before major launches, as sketched below. Above all, don’t rely on just one person in your organization to take care of your data; make it an organizational priority.
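To make “QA testing on your data” concrete, here is a minimal sketch of a pre-launch check, assuming a pandas DataFrame with a hypothetical column name; the point is that cheap null and range checks catch exactly the class of unit and decimal errors described above:

```python
# A minimal sketch of pre-launch data QA, assuming a pandas DataFrame
# with a hypothetical `thrust_newtons` column; adapt to your own schema.
import pandas as pd

def qa_check(telemetry: pd.DataFrame) -> list[str]:
    """Return a list of failures; an empty list means the data passes."""
    failures = []
    if telemetry.empty:
        failures.append("dataset is empty")
    if telemetry["thrust_newtons"].isna().any():
        failures.append("missing thrust values")
    # Range check: values wildly outside expectations often signal a
    # unit or decimal-place error rather than real physics.
    if not telemetry["thrust_newtons"].between(0, 500).all():
        failures.append("thrust outside plausible range (unit error?)")
    return failures

failures = qa_check(pd.DataFrame({"thrust_newtons": [44.5, 44.8, 10.0]}))
assert not failures, f"blocking launch: {failures}"
```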

Although a single person failed to notice the discrepancy in the measurements, that individual was not the failure. As Gavin said, “People make mistakes. … It was our failure to look it through and find it. It is unfair to rely on one person.”

Amsterdam City Council: €188m lost due to housing benefit error

In 2014, Amsterdam’s housing benefit software was programmed in cents instead of euros. As a result, the city sent €188 million to poor families instead of €1.8 million and was forced to ask for the extra money back.

The details of the Amsterdam error are astonishing. Citizens who would regularly have received €155 were instead sent €15,500. Some even received as much as €34,000! However, and equally mind-blowing, nothing in the software alerted the administrators, and no one in the city government noticed the error.
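The failure mode is easy to reproduce. The sketch below is purely illustrative (the names and figures mirror the story, not Amsterdam’s actual system): an amount stored in cents is paid out as if it were euros, inflating every payment a hundredfold, and a simple sanity ceiling on the batch total would have flagged it:

```python
# Illustrative sketch of the cents-vs-euros failure mode (hypothetical
# names, not Amsterdam's actual system). Benefits are stored in cents,
# but the payment step treats the number as whole euros.
BENEFIT_CENTS = 15_500  # intended payout: EUR 155.00

# Bug: interpreting a cent amount as euros inflates it 100x.
paid_euros_buggy = BENEFIT_CENTS         # 15,500 euros sent
paid_euros_fixed = BENEFIT_CENTS / 100   # 155.00 euros intended

# A cheap guard that would have flagged the batch: compare the total
# against a sanity ceiling before any money leaves the building.
EXPECTED_TOTAL_EUROS = 1_800_000
actual_total = 188_000_000  # what the buggy run produced
try:
    if actual_total > EXPECTED_TOTAL_EUROS * 2:
        raise RuntimeError("payout total is ~100x expectations; halt the run")
except RuntimeError as err:
    print(f"batch blocked: {err}")
```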

Watch those decimal places, warns the Amsterdam City Council (Redaktion93/Shutterstock)

Data quality disasters like the Amsterdam blunder are bad for leadership and morale. When Pieter Hilhorst was appointed CFO of Amsterdam the year before the snafu, he already faced opposition due to his lack of experience. After the 2014 disaster, Hilhorst was forced to order an expensive KPMG investigation into how the data error occurred, the Irish Times reported. Because of the error, and the sudden windfall followed by a demand for repayment, some Amsterdam residents faced financial difficulties, including debt. In the end, after all the trouble this caused, the city government had to apologize “unreservedly”, which is never a position a leader wants to be in.

To avoid a monumental mistake like the one in Amsterdam, make sure leadership invests in high-quality data early on. Instead of hiring a consultant to find out why an error occurred after the fact, experts recommend performing a pre-mortem. In a pre-mortem, members of your organization try to detect weak points before your project goes live. Such “prospective hindsight” increases the ability to identify reasons for future outcomes by 30%, according to research.

Despite all the problems with the Amsterdam data disaster, there is a silver lining that restores faith in humanity. In an incredible show of responsibility on the part of Amsterdam’s neediest, all but €2.4 million of the extra payments went back into the city’s coffers!

Data quality disasters are all too common

We’ve already looked at some spectacular disasters, but data issues are all too common in day-to-day business.

Data reliability is a major issue in all organizations, even if they don’t crash rockets on Mars. It’s no wonder, then, that data teams are reporting that data quality has risen to the top of their priority KPIs. Of course, tackling the data quality issue is a huge topic in itself with a variety of best practices and principles still emerging.

For now, the best advice for maintaining the highest-quality data possible, while meeting tight deadlines and iterating quickly, is for organizations to move data reliability as far “to the left” as possible. This means catching errors early in the process with proactive data quality checks and consistent testing. By staying ahead of these potential issues, businesses and other organizations can avoid the embarrassment and financial loss that data errors like these cause.
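As a minimal sketch of what “shifting left” can look like in practice (the function and thresholds below are illustrative, not a specific tool’s API), validate schema and business invariants at ingestion so that a cents-for-euros style error fails fast instead of propagating downstream:

```python
# A minimal sketch of "shifting left": validating data at the point of
# ingestion instead of discovering problems downstream. All names and
# thresholds are illustrative.
import pandas as pd

EXPECTED_COLUMNS = {"household_id", "benefit_euros"}

def ingest(raw: pd.DataFrame) -> pd.DataFrame:
    # Schema check: fail fast if the contract with upstream is broken.
    missing = EXPECTED_COLUMNS - set(raw.columns)
    if missing:
        raise ValueError(f"upstream schema drift, missing: {missing}")
    # Invariant check: catch unit/decimal errors before they propagate.
    if (raw["benefit_euros"] > 1_000).any():
        raise ValueError("benefit exceeds EUR 1,000; suspect a cents bug")
    return raw

clean = ingest(pd.DataFrame({"household_id": [1], "benefit_euros": [155.0]}))
print(clean)
```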

About the Author: Gleb Mezhanskiy is the founding CEO of Datafold, a data reliability platform that helps data teams deliver trusted data products faster. He has led data science and product teams at companies of all stages. As a founding member of the data teams at Lyft and Autodesk and a product manager at Phantom Auto, Gleb has built some of the world’s largest and most sophisticated data platforms, including essential tools for data discovery, ETL development, forecasting, and anomaly detection. Visit Datafold at www.datafold.com and follow the company on Twitter, LinkedIn, Facebook and YouTube.
