Same Cart, Different Power: What Instacart’s AI Pricing Experiments Really Tell Us

Instacart wants this to sound harmless: a few tiny “tests,” a little experimentation, all in the name of “learning what matters most to consumers.”

Consumer Reports and Groundwork Collaborative just showed us what is actually happening. Their joint investigation reveals an AI pricing system that quietly learns where it can raise prices without losing you. (Groundwork Collaborative; Consumer Reports)

That should worry anyone who buys groceries. Inside a country shaped by racialized geography and food deserts, it is not just a pricing story. It is a power story.

Executive summary

  • Instacart is running AI price experiments at scale. Shoppers buying the same items, from the same store, at the same time, were charged different prices, with some items showing up to five price points and baskets differing by about 7 percent. (Groundwork Collaborative; Consumer Reports)

  • The mechanism is price elasticity. Instacart’s tools are built to see where prices can rise without driving shoppers away, then lock those higher prices in. (Groundwork Collaborative)

  • The study proves the mechanism but not the full equity impact. Within its non-representative sample, researchers did not find statistically meaningful differences by race, income, or age in who got higher prices, and they are explicit about that. (Groundwork Collaborative)

  • The structural risk is still obvious. In food deserts and low-income, low-access neighborhoods, an elasticity engine will “learn” that people with the least alternatives tolerate higher prices, even if no one ever feeds race into the model. (USDA Economic Research Service)

The report gives us receipts. The rest is the part we need to say out loud.

What the investigation actually found

Groundwork Collaborative, Consumer Reports, and More Perfect Union ran a live test of Instacart’s pricing system in September 2025. (Groundwork Collaborative; Consumer Reports)

Key facts:

  • 437 shoppers participated across four cities: Seattle; Washington, DC; Saint Paul; and North Canton.

  • In the main tests, participants bought identical baskets of 18–20 grocery items from the same stores (primarily Safeway and Target) at the same time, using Instacart pickup.

  • Researchers captured screenshots and receipts, then compared item prices and basket totals across shoppers.

The patterns were not subtle:

  • 74% of items in the main tests were offered at more than one price to different shoppers at the same store and time.

  • A single product could show up to five different prices simultaneously. One example: a dozen Lucerne eggs listed at five distinct price points in the same Washington, DC Safeway. (The Independent)

  • The gap between the lowest and highest price for an item averaged about 13%, and reached 23% for some staples.

  • For full baskets, totals differed by about 7% on average, with some shoppers paying roughly ten dollars more than others for the same list.

Using Instacart’s own estimate of annual grocery spending for a family of four, the report calculates that landing regularly in the higher-priced group could quietly cost a household around $1,200 a year.
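The arithmetic behind that estimate is easy to check. A minimal sketch, assuming an annual grocery spend of about $17,000 for a family of four (a figure chosen here because it is consistent with the report’s numbers, not one quoted from it):

```python
# Back-of-the-envelope check of the report's annual-cost estimate.
# The annual spend below is an assumed figure consistent with the
# reported result, not a number taken from the report itself.
annual_grocery_spend = 17_000  # assumed annual spend, family of four ($)
basket_premium = 0.07          # average basket-total spread found in the study

extra_cost_per_year = annual_grocery_spend * basket_premium
print(f"Estimated extra cost: ${extra_cost_per_year:,.0f} per year")
# → roughly $1,200 per year, matching the report's figure
```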

A smaller follow-up test across 15 retailers (including Albertsons, Costco, Kroger, Safeway and Sprouts) found similar patterns of AI-driven price variation.

There is no way to wave this away as random internet noise. This is a pricing model in production.

What Instacart says it is doing

Instacart’s public response focuses on two main claims.

1. “Limited, randomized tests” rather than personalized gouging

Instacart:

  • Acknowledges that pricing tests are happening, describing them as “limited, short-term, randomized tests” that help retail partners “learn what matters most to consumers and how to keep essential items affordable.” (Consumer Reports)

  • States that “most customers see standard prices” and insists these tests are not real-time surge pricing. (Consumer Reports)

  • Emphasizes that the tests examined in this study did not use personal or demographic data such as income or shopping history to assign prices. (Consumer Reports)

2. “Retailers set prices, we just provide tools”

Instacart also stresses that retailers control prices, and that its platform simply passes through those choices and helps synchronize online and in-store pricing where possible. (Consumer Reports)

The other side of the story is how Instacart markets its AI pricing capabilities to those same retailers:

  • Instacart acquired Eversight in 2022, a firm that sells AI-driven price and promotion optimization tools that run constant tests, learn price elasticity, and promise higher margins. (New York Post)

  • Eversight-style tools are pitched as a way to “optimize pricing” and “continuously drive growth” by targeting offers based on shopper behavior, location, competition, and more. (New York Post)

The company also acknowledges that it has the capability to incorporate demographic and household data in its systems, even if it says this particular round of tests did not use them. (Federal Trade Commission)

So, at minimum, we are dealing with:

  • A platform that runs AI price experiments at scale.

  • A toolset explicitly designed to push prices up where it can.

  • A corporate promise that it is not using the full depth of data it already has.

That is not a comfort. That is a risk statement.

What the study gets exactly right

As someone who sits at the intersection of retail, AI, and organizational design, I want to be clear: this investigation is strong.

It nails several things that most corporate “AI pricing” pieces never touch.

1. It proves the phenomenon, not just the suspicion

The team moves the conversation from “we suspect dynamic pricing is happening” to “here is direct evidence that shoppers are being sorted into different price tiers for the same groceries, at the same moment.” (Groundwork Collaborative; Consumer Reports)

2. It translates numbers into household stakes

The move to express the average 7% basket spread as roughly $1,200 a year for a family of four makes the harm legible. In a year when the CPI for food at home is still rising about 2.7 percent over twelve months, another hundred dollars a month in invisible price experiments is not a rounding error. (Bureau of Labor Statistics)

3. It connects to a broader regulatory moment

The report situates Instacart’s practices inside the wider “surveillance pricing” ecosystem:

  • The Federal Trade Commission has launched a 6(b) study into how firms use detailed personal data to set individualized prices, and its interim findings describe a wide range of behavioral and location data being used to target offers. (Federal Trade Commission)

  • New York’s Algorithmic Pricing Disclosure Act now requires companies doing business in the state to disclose when they use “personalized algorithmic pricing” that relies on personal data. (Jones Day; Alston & Bird)

The Instacart case is not an outlier. It is an example.

4. It is honest about what it did not find

Methodologically, the researchers:

  • Collected demographic information on participants, including race, income, age, and gender.

  • Tested whether those variables, along with Instacart usage history, predicted who ended up with higher basket totals.

  • Concluded that within their sample, differences linked to these demographics were small and not statistically meaningful. (Groundwork Collaborative)

That restraint matters. They do not overclaim. They say exactly what their data supports.

Where the study falls short, by design

The limitations here are about scope. The authors themselves flag crucial constraints.

You can think about the gaps in three buckets.

1. Who was in the sample

Participants were mostly:

  • Consumer Reports members and volunteers.

  • People recruited through CR mailing lists and More Perfect Union’s community.

  • Shoppers comfortable using Zoom and following complex instructions in real time. (Groundwork Collaborative)

This is not a random cross-section of Instacart users. It likely skews toward higher digital literacy and more stable access to devices and payment methods.

2. Where the tests ran

The main experiments focused on:

  • Three Safeway stores and two Target stores.

  • Four cities that, while diverse, do not represent the full geography of low-income communities, rural areas, and entrenched food deserts. (Groundwork Collaborative; Consumer Reports)

There is a follow-up test across 15 retailers, but it is not the core of the statistical analysis. (Groundwork Collaborative)

3. What the design can see

The experiment captures:

  • Short-term behavior, in a controlled moment.

  • One basket per person per test, not months of shopping patterns.

That is exactly what you need to prove that AI pricing experiments exist and move money. It is not what you need to detect chronic, structural patterns in specific zip codes over time.

So when the report says “we did not find statistically significant demographic differences in this sample,” that does not mean there are no structural harms. It means:

With this group of mostly CR-connected shoppers, in these locations, at this time, we cannot prove that certain demographics already pay more.

That is a very different statement from “everyone is treated fairly.”

How price elasticity behaves in a food desert

Now add context the study does not fully model.

The USDA’s Food Access Research Atlas identifies census tracts that are both low income and low access, based on distance to supermarkets, income levels, and vehicle access. (USDA Economic Research Service)

Broadly, research using that data finds:

  • Low-income and majority-Black neighborhoods are more likely to be classified as low income and low access. (USDA Economic Research Service)

  • Residents in these areas often travel farther for full-service groceries and have less reliable transportation. (USDA Economic Research Service)

Now imagine an AI pricing engine dropped into that landscape.

Price optimization answers a simple question:
How far can I raise the price before behavior changes?

In a dense, affluent area with multiple competitors, small price increases often prompt immediate reactions. People switch stores, swap brands, or abandon carts. The algorithm learns that these customers are “price sensitive.”

In a low-income, low-access tract:

  • There are fewer alternative stores.

  • Transit is fragile or time-consuming.

  • Work schedules and caregiving leave less flexibility.

The same experiment might show very little defection when prices rise. From the system’s point of view, demand looks “sticky.” This is read as a safe place to hold higher prices. (USDA Economic Research Service)

The model does not need to know that these are poor Black families or other marginalized communities. It only needs to observe that:

  • In these locations, with these store footprints and these buying patterns, people keep paying at higher price points.

Location, retailer mix, and basket patterns become proxies for race and income, even if no one ever feeds race into the model explicitly. (Federal Trade Commission)

That is how you get de facto discrimination from a system that calls itself neutral.
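To make that dynamic concrete, here is a deliberately simplified toy model. Everything in it — the elasticity numbers, the 2% step size, the greedy search — is invented for illustration; this is not Instacart’s or Eversight’s actual algorithm. The point is that an optimizer that only watches defection rates will settle on a higher price wherever shoppers have few alternatives:

```python
import random

def purchase_probability(price, base_price, elasticity):
    """Chance a shopper still buys as the price rises above baseline.
    Higher elasticity means more defection per unit of markup."""
    markup = (price - base_price) / base_price
    return max(0.0, 1.0 - elasticity * markup)

def learn_price(base_price, elasticity, rng_seed=0, steps=200):
    """Greedy price search: keep nudging the price up while revenue
    per shopper improves; stop as soon as defection outweighs the markup."""
    rng = random.Random(rng_seed)
    price = base_price
    best_revenue = base_price  # at the shelf price, everyone buys
    for _ in range(steps):
        trial = price * 1.02  # test a 2% markup
        buyers = sum(
            rng.random() < purchase_probability(trial, base_price, elasticity)
            for _ in range(500)  # simulate 500 shoppers at the trial price
        )
        revenue = trial * buyers / 500
        if revenue <= best_revenue:
            break  # shoppers pushed back; stop raising
        best_revenue, price = revenue, trial
    return price

base = 4.00  # shelf price of a staple item
# Elastic demand: plenty of nearby alternatives, shoppers defect quickly.
competitive = learn_price(base, elasticity=8.0)
# Inelastic ("sticky") demand: few alternatives, as in a low-access tract.
captive = learn_price(base, elasticity=0.3)
print(f"Learned price, competitive area: ${competitive:.2f}")
print(f"Learned price, low-access area:  ${captive:.2f}")
```

With these invented numbers, the “competitive” price never moves off the shelf price, while the “captive” price drifts well above it — and note that the model was never told anything about who lives in either place.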

Why “random tests” are not a moral shield

Instacart emphasizes that its tests are randomized and that this particular set did not use “customer characteristics” like income or shopping history. (Consumer Reports)

Randomization is reassuring in a statistics class. It does not absolve you in a distorted market.

If your “random” price tests are:

  • Run largely with retailers that already apply online markups.

  • Concentrated in areas where competition is thin.

  • Layered onto populations whose movement and choices are constrained by design.

Then your randomization sits on top of a world that is deeply non-random.

In that context, the tool does exactly what it was bought to do:

  • It discovers where prices can increase without losing volume.

  • It carries those findings forward into ongoing pricing strategies.

  • It delivers the promised lift in profit margins. (New York Post)

From a spreadsheet, this looks efficient.
From the perspective of someone standing outside the only full-service grocery store for miles, it looks like extraction dressed as convenience.

The harm is not that an Instacart employee wakes up and says, “Charge poor Black neighborhoods more.” The harm is that no one designs a system that will refuse to learn that lesson when the data points that way.

The harm comes in refusing to learn the lessons of Revionics and the AI pricing tools that have come before this.

Regulators are moving, but disclosure alone is not enough

We are at the beginning of a policy response.

  • New York’s Algorithmic Pricing Disclosure Act requires businesses using “personalized algorithmic pricing” to clearly disclose when personal data is used to set prices, with penalties for non-compliance. A recent challenge from the National Retail Federation was dismissed, so the law stands. (Jones Day; Alston & Bird)

  • The FTC’s surveillance pricing study is documenting how firms use location, browsing history, and other sensitive data to individualize prices and is examining impacts on privacy, competition, and consumer protection. (Federal Trade Commission)

These steps matter. They create visibility and signal that the old “one price for everyone” norm is being replaced by something far more opaque.

But disclosure is a floor, not a ceiling.

A banner that says “an algorithm using your personal data may have set this price” is better than nothing. It does not help you if:

  • You still have no practical alternative to the platform.

  • You lack time and bandwidth to decode the pricing game.

  • The underlying models keep learning from structural inequities that disclosure does not touch.

In neighborhoods that have been treated as optional for decades, a warning label is not protection. It is a confession taped to the front door.

What real guardrails would look like

If Instacart, retailers, and regulators were serious about equity in AI pricing, the guardrails would not be vibes and press quotes. They would be designed, testable constraints.

At minimum:

1. No upward price experiments in low-income, low-access tracts

Use the USDA Food Access Research Atlas to identify census tracts that are both low income and low access. Prohibit experiments that increase effective prices on essential groceries in those tracts. If you want to test there, only allow price reductions. (USDA Economic Research Service)

2. Hard anti-regressivity constraints in the optimization logic

Bake into the model an explicit rule: if a geography is flagged as low income or low access, any uplift that the algorithm “discovers” must be capped below a baseline or discarded entirely.
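A minimal sketch of what such a constraint could look like in code. The function name, the tract flag, and the hard “discard the uplift” rule are all illustrative assumptions — the flag is imagined as a lookup against the USDA Food Access Research Atlas, not an existing API:

```python
def apply_guardrail(proposed_price, baseline_price, tract_is_low_access):
    """Illustrative anti-regressivity rule (an assumption, not a real
    Instacart mechanism): in census tracts flagged as low income /
    low access, any price uplift the optimizer "discovers" is
    discarded and the baseline shelf price is used instead."""
    if tract_is_low_access and proposed_price > baseline_price:
        return baseline_price  # reject the uplift entirely
    return proposed_price

# A discovered 6% uplift survives in an unflagged tract...
print(apply_guardrail(4.24, 4.00, tract_is_low_access=False))  # 4.24
# ...but is rejected in a low-income, low-access tract.
print(apply_guardrail(4.24, 4.00, tract_is_low_access=True))   # 4.0
```

The key design choice is that the constraint is a hard rule outside the optimizer, so no amount of “learning” can route around it.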

3. Independent audits centered on race and geography

Require regular third-party audits that:

  • Compare price patterns across zip codes, controlling for retailer, product and time.

  • Focus specifically on majority-Black, low-income, and low-access neighborhoods.

  • Treat persistent premiums on staples in these areas as indicators of harm, not acceptable variance.

4. Transparency that is actually usable

If you are going to disclose algorithmic pricing:

  • Show shoppers how Instacart prices compare to in-store shelf prices for the same items.

  • Provide simple ways to opt out of experiments without losing access to the service.

  • Make the default understandable to someone who is tired, busy and food insecure.

5. A design posture of harm reduction, not margin maximization

AI pricing could be used to route discounts to people with the least access and the highest need. The choice to invest instead in extracting a few extra points of margin from already stressed households is not neutral. It is a value statement.

The cold truth

The Groundwork and Consumer Reports investigation does something essential: it proves that the price you see on Instacart is not simply “the price.” It is a version of the price, assigned to you by a system that is paid to find your pain threshold. (Groundwork Collaborative; Consumer Reports)

Inside their sample, the researchers do not find evidence that poor or Black shoppers are already being singled out by this specific experiment. They say that openly. That honesty is why their work should be trusted.

But if you connect their findings to what we already know about food deserts, low-income neighborhoods, and the way AI pricing tools are designed and sold, the risk is not hypothetical.

If you take:

  • A system explicitly built to raise prices until people push back.

  • A platform with the capability to use extremely detailed personal and behavioral data.

  • A grocery landscape where poor Black neighborhoods and other communities of color are more likely to be low income and low access. (USDA Economic Research Service)

And you do not install hard guardrails, you know where this ends.

The harm will not be accidental.
The collapse will not be personal.
The injury will be exactly what the system was allowed to learn.

References

  1. Groundwork Collaborative. Same Cart, Different Price: Instacart’s Price Experiments Cost Families at Checkout. Report and summary, December 2025.

  2. Consumer Reports. Instacart’s AI Pricing May Be Inflating Your Grocery Bill. Article and media room summary, December 2025.

  3. CBS News. Instacart’s AI-enabled pricing may bump up your grocery costs by as much as 23%, study says. December 9, 2025.

  4. USDA Economic Research Service. Food Access Research Atlas and Low-Income and Low-Foodstore-Access Census Tracts, 2015 Update.

  5. U.S. Bureau of Labor Statistics. Consumer Price Index, September 2025: Food at Home.

  6. Federal Trade Commission. FTC Issues Orders to Eight Companies Seeking Information on Surveillance Pricing and FTC Surveillance Pricing Study Indicates Wide Range of Personal Data Used to Set Individualized Consumer Prices.

  7. State of New York. Algorithmic Pricing Disclosure Act, with related analyses from Jones Day and Alston & Bird.

  8. News coverage of Instacart’s response and Eversight’s role, including ABC News, CNN, and other outlets.
