SAFERENT DITCHES AI SCORING FOR TENANTS AFTER $2.3M DISCRIMINATION LAWSUIT SETTLEMENT
In significant news for both the rental market and the artificial intelligence (AI) sector, SafeRent, a widely used AI-powered tenant screening tool, will stop issuing AI-generated scores for rental applicants who use housing vouchers. The decision follows a $2.3 million settlement approved by Judge Angel Kelley, and it is expected to reverberate through rental screening practices and the way AI is regulated in this space.
SafeRent's decision stems from a class action lawsuit filed in Massachusetts in 2022. The suit accused SafeRent of violating the Fair Housing Act, adding to a growing body of case law that scrutinizes AI systems and algorithms for compliance with existing anti-discrimination laws.
The lawsuit alleged that SafeRent's scoring system was inherently skewed, disadvantaging Black and Hispanic applicants and unfairly rejecting those who rely on housing vouchers. The complaint pointed to a lack of transparency and built-in bias in the scoring system, concerns that increasingly surround AI-powered decision tools.
Recognizing the gravity of the harm and the need for fairness, the settlement sets new guardrails for how SafeRent operates. Under its terms, SafeRent is barred not only from displaying a tenant screening score for applicants using housing vouchers, but also from showing accept-or-deny recommendations for their applications.
The $2.3 million settlement fund is earmarked for Massachusetts rental applicants who previously struggled to secure housing because of SafeRent's tenant score. The class action thus has the potential to set a precedent for the rights of applicants who apply with housing vouchers.
The SafeRent episode adds to growing concern about algorithm-driven property management software. It follows the Department of Justice's lawsuit against RealPage, which accuses the company of inflating rents through its algorithmic pricing software.
As AI penetrates ever more of everyday life, the SafeRent case underscores the need for scrutiny and tighter regulation to curb discrimination, inadvertent or otherwise, caused by these algorithms. It may also signal the decline of AI-generated scores in tenant screening, pushing the rental market toward more equitable and transparent methods.
In conclusion, the case is a stark reminder that emerging technologies like AI must withstand legal and ethical scrutiny, and must adapt when they fall short. The SafeRent saga may well prove a catalyst for greater transparency in AI-driven services, pointing toward a more inclusive, fair, and non-discriminatory future.