Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.

Why it matters: The move comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”

  • EldritchFeminity@lemmy.blahaj.zone · 8 months ago

    I would argue that there’s plenty to distrust about it, because its accuracy leaves much to be desired (to the point where it regularly makes things up outright) and because it is inherently vulnerable to biases in the data it is trained on.

    Early facial recognition tech had trouble distinguishing between the faces of Black people, people below a certain age, and women, and nobody could figure out why — until they stepped back and looked at the demographics of the employees at these companies. They were mostly middle-aged and older white men, and those were the people whose faces made up the data sets used for in-house development and testing of the tech. We’ve already seen similar biases in image generators, which tend to default to thin white women when prompted for an attractive woman.

    Plus, there’s the data degradation issue. Supposedly, ChatGPT isn’t fed data from the internet at large past 2021, because the volume of AI-generated content after that point causes a self-perpetuating decline in quality.