Activity

  • Sankara posted in the group Sci & Tech

    11 months ago
    Executives from top artificial intelligence companies have warned that the technology they are building should be considered a societal risk on a par with “pandemics and nuclear wars,” according to a 22-word statement from the Center for AI Safety that was signed by more than 350 executives, researchers and engineers.

    The statement comes at a time of growing concern about the potential harms of A.I. Advancements in large language models — the type of A.I. system used by ChatGPT and other chatbots — have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

    Eventually, some believe, A.I. could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down. Those fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building poses grave risks and should be regulated more tightly.

    Concerns: Some argue that A.I. is improving so rapidly that it has already surpassed human performance in some areas, and that it will soon surpass it in others.

    – The New York Times
