In New York City, three things are certain: death, taxes, and sky-high rent.
There may not be much AI can do for death. Taxes, I’m not sure. But it turns out this technology can be surprisingly helpful if you have a unique rent situation.
I live in a former superintendent-occupied apartment in a prewar co-op in Manhattan. Although I signed a market-rate lease when I moved in, I later learned — with the help of AI tools — that my apartment might qualify as a “rent-stabilized” unit.
New York is one of a handful of cities where rent increases can be limited by law, depending on the circumstances. Known as rent stabilization, it's a milder form of rent regulation than rent control: landlords can still raise the rent, but only by limited amounts.
My apartment’s rent history showed the unit had been registered as rent-stabilized in the 1980s and contained no obvious record of a deregulation event. Did that mean the apartment might still be regulated? Could I gain protection from potentially large rent increases in the future?
I couldn’t find clear answers on Reddit or Google, so I uploaded a photo of the apartment’s rent-registration history and asked ChatGPT to help me interpret it.
ChatGPT walked me through the document and said this type of situation is routinely evaluated by the New York State Division of Housing and Community Renewal, which oversees rent-stabilization and related rules in NYC.
I told ChatGPT to show me its sources. It cited relevant provisions of the rent-stabilization code and several court decisions. AI can sometimes hallucinate, so I asked Perplexity and Google Gemini the same questions, as an initial way to double-check the facts. These AI tools reached similar high-level conclusions but differed in their reasoning and the way they cited authoritative sources. Gemini and Perplexity were better at showing their work than ChatGPT, and Gemini tended to be the most conservative in its framing, which made me trust it more.
I still caught all three bots making mistakes. In one instance, Gemini cited a legal case that didn’t exist. When I called it out, it quickly corrected itself. I repeated this process multiple times, reading the underlying legal provisions and pushing back on AI-powered conclusions I thought were wrong. I even role-played as a landlord, presenting arguments rental property owners might make about why my apartment is not rent-stabilized.
Eventually, I hit a wall. No matter how much I challenged the bots, they remained confident that my situation raised legitimate questions that New York's DHCR might resolve. That's when I called a housing attorney to sanity-check everything; the consultation cost $35 through the New York Bar Association's referral service. After reviewing the documents, he agreed that filing a rent-overcharge complaint was reasonable. So I went ahead. The proceeding is ongoing, and no determination has been made yet.
My case is pretty unusual. But there are broader takeaways:
- AI chatbots can help you understand complex documents, which meaningfully lowers the barrier for people trying to understand their rights.
- You still have to be diligent and check AI outputs, including verifying that the cases and other sources being cited are real.
- Don't make important final decisions based only on AI answers.
- These tools can help you identify the right questions to ask and decide when it's time to consult a human expert.
Reach out to me via email at abarr@businessinsider.com.