I've just read this article, which I think addresses the topic of AI-induced cognitive biases, albeit limited to the field of intelligence.

The references it points to seem just as interesting.

A pity it doesn't consider the problem that quality data will become ever rarer because of the pollution of the ecosystem by generative LLMs.

<https://warontherocks.com/2024/10/ai-and-intelligence-analysis-panacea-or-peril/>

[...] Processing a growing amount of information requires the intelligence analyst to comb through, identify, and synthesize disparate data points into a judgment — which, when done well, reduces uncertainty. However, cognitive biases <https://www.ialeia.org/docs/Psychology_of_Intelligence_Analysis.pdf> coupled with the problem of too much data or poor data quality plague this process, leading to imprecise assessments that could contribute to policy and decision-making failures, increased risks to military operations, and other disadvantageous and cascading outcomes. Given the challenging nature of intelligence analysis, could AI help avoid these consequences and provide decision-makers with crucial, objective, and accurate information?

If the promise of AI <https://mitpress.mit.edu/9780262043045/the-promise-of-artificial-intelligence/> holds true, then generative AI technologies <https://cset.georgetown.edu/article/what-are-generative-ai-large-language-models-and-foundation-models/> such as ChatGPT, which are based upon large language models, can add efficiencies to the analysis process. For example, generative AI could summarize lengthy texts (e.g., foreign grey literature <https://books.google.com/books/about/Information_Sources_in_Grey_Literature.html?id=30ZtV2VHMY8C>), translate foreign languages, conduct open-source sentiment analysis, and perform various other functions. Moreover, generative AI could assist in the development of intelligence assessments. This does not relieve human intelligence analysts of their pivotal function. Still, generative AI could serve as an adjunct to the analysis process, aiding in identifying analytical flaws or inconsistencies.

While these are promising functions, and it is reasonable to assume that intelligence agencies have already incorporated such technologies into their everyday processes, generative AI is not without its faults. First, generative AI does little to alleviate the perennial problem of analytical bias. Generative AI technologies built on large language models rely upon preexisting datasets, which are inherently unstructured and potentially flawed. Linked to this point, today's generative AI models are prone to mistakes and can provide false or inaccurate content. These "hallucinations <https://research.google/pubs/hallucinations-in-neural-machine-translation/>" stem from how generative AI models are developed; despite being trained on a large corpus of data, if the generative AI system encounters an unfamiliar word, phrase, or topic — or if the data is insufficient — it will make an inference based upon its understanding of language and will give an answer that the system deems logical, but which could be erroneous.

Second, the information needed to determine an adversary’s capabilities and intentions is no longer solely the purview of governments <https://www.foreignaffairs.com/world/open-secrets-ukraine-intelligence-revolution-amy-zegart>. Non-governmental organizations, private entities, social media companies, and others have emerged as important data brokers <https://epic.org/issues/consumer-privacy/data-brokers/> possessing the information required to understand the strategic environment and to construct accurate intelligence assessments. The use of generative AI in intelligence analysis needs to address the associated underlying issues of data access, quality, and bias.

[...]

Data analysis and natural language processing represent just a sampling of generative AI's applications to intelligence operations. Indeed, the promise of AI could yield manifold benefits in the field of intelligence analysis beyond these two functions. However, AI is not without issue. It is vitally important to highlight that the core functionality of generative AI derives from the data employed to train the model. If the dataset contains bias, the model will continue to propagate and perhaps even amplify those biases. Thus, we return to the perennial problem of negative mental models affecting the analysis that could feed generative AI systems. The primary consequence of leveraging pre-existing intelligence datasets is the unknown implication of biases contained in the finished analytical products. The injection of such datasets could continue the diffusion of skewed analysis, creating a cyclical process that exponentially adds to the compendium of imprecise and possibly dubious intelligence products.

The potential of generative AI systems to provide misleading outputs, or hallucinations, formed from incomplete or inaccurate data is a common problem and an inherent limitation of today's AI technology. Generative AI systems predict the next word, phrase, image, or other output based on patterns observed in the training data. In the absence of data, or in the presence of extraneous data, generative AI will infer the most likely sequence of content, which may contain falsehoods or simply fabricated information. As such, human knowledge, experience, expertise, and intuition will continue to remain the vital components of intelligence tradecraft until this technology matures.
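The next-token mechanism described above can be illustrated with a deliberately crude toy model (nothing like a real LLM's architecture, just a bigram lookup): when the input matches patterns seen in training, the output is grounded; when it does not, the model still answers confidently by guessing from what it knows, which is the essence of a hallucination. All names and the training snippet here are invented for illustration.

```python
import random

# Hypothetical toy "training corpus" (invented for this sketch).
TRAINING = "the agent filed the report the report cited the source".split()

# Count which word follows which in the training data (a bigram table).
bigrams = {}
for prev, nxt in zip(TRAINING, TRAINING[1:]):
    bigrams.setdefault(prev, []).append(nxt)

rng = random.Random(0)  # fixed seed so the sketch is reproducible

def next_word(word):
    if word in bigrams:
        # Grounded: sample from continuations actually observed in training.
        return rng.choice(bigrams[word])
    # Unseen input: the model has no evidence, but still produces a
    # fluent-looking answer by guessing from its vocabulary -- a "hallucination".
    return rng.choice(TRAINING)

print(next_word("the"))      # follows an observed pattern
print(next_word("dossier"))  # never seen in training: answer is a confident guess
```

The point of the sketch is that the fallback branch is indistinguishable, from the outside, from the grounded one: both return a plausible word, which is why poor or missing data quietly degrades the output rather than producing an obvious error.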

It follows that quality data is essential for using generative AI for intelligence analysis purposes. Perhaps just as important is acquiring the data. Data is certainly a commodity <https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data>: a lucrative product for purchase, sale, or collection. Though intelligence organizations expend perhaps a disproportionate amount of their funding on sophisticated collection capabilities that acquire highly classified material, the proliferation of publicly available or open-source information means governments no longer possess a monopoly on data. Data in the private sector can prove just as valuable, if not more so, than data collected through highly technical means. Therefore, an intelligence organization should pursue the acquisition of such data. However, several challenges arise when a government attempts to acquire data from the private sector, including trust issues, proprietary concerns, and compatibility problems.

[...]


On 11/10/24 10:39, Stefano Quintarelli via nexa wrote:
hi everyone
has anyone written anything on the bias of technological solutionism?

in particular with reference to AI, whereby this marvelous technology is supposedly able to provide the right answer to any problem?

cheers, s.
