<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://lms.onnocenter.or.id/wiki/index.php?action=history&amp;feed=atom&amp;title=Data_Science_Strategy%3A_Explainabily_di_AI</id>
	<title>Data Science Strategy: Explainabily di AI - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://lms.onnocenter.or.id/wiki/index.php?action=history&amp;feed=atom&amp;title=Data_Science_Strategy%3A_Explainabily_di_AI"/>
	<link rel="alternate" type="text/html" href="https://lms.onnocenter.or.id/wiki/index.php?title=Data_Science_Strategy:_Explainabily_di_AI&amp;action=history"/>
	<updated>2026-04-19T23:31:46Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.45.1</generator>
	<entry>
		<id>https://lms.onnocenter.or.id/wiki/index.php?title=Data_Science_Strategy:_Explainabily_di_AI&amp;diff=63140&amp;oldid=prev</id>
		<title>Onnowpurbo: Created page with &quot;Securing Explainability in AI Explainable AI (XAI), also referred to as Transparent AI, involves the ability to explain how an algorithm has reached a particular insight or co...&quot;</title>
		<link rel="alternate" type="text/html" href="https://lms.onnocenter.or.id/wiki/index.php?title=Data_Science_Strategy:_Explainabily_di_AI&amp;diff=63140&amp;oldid=prev"/>
		<updated>2021-04-07T02:46:29Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;Securing Explainability in AI Explainable AI (XAI), also referred to as Transparent AI, involves the ability to explain how an algorithm has reached a particular insight or co...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;Securing Explainability in AI&lt;br /&gt;&lt;br /&gt;
Explainable AI (XAI), also referred to as Transparent AI, involves the ability to explain how an algorithm has reached a particular insight or conclusion that results in a certain decision to take action. Though an important aspect to consider as part of the evolution of AI, it isn’t easy to solve technically, especially if the AI is acting in real time and thus using streaming data that hasn’t been stored. To bring this point home, imagine, if you will, that you cannot explain to your customer why the machine made a certain decision — a decision you would not have made based on your own experience. What do you tell the customer then?&lt;br /&gt;&lt;br /&gt;
Addressing explainable AI is becoming increasingly important in terms of our human ability to understand more about why and how the AI is performing in a certain way. In other words, what can be understood by studying how the machine is learning by processing these huge amounts of data from many dimensions, looking for certain patterns or deviations? What is it that the machine detects and understands that you missed, interpreted differently, or simply were not capable of detecting? Which conclusions can be drawn from that?&lt;br /&gt;&lt;br /&gt;
CHAPTER 3 Dealing with Difficult Challenges&lt;br /&gt;&lt;br /&gt;
Ethically, AI explainability will be even more important when data scientists start building more advanced artificial intelligence, where many different algorithms are working together. It will be the key to understanding exactly what machines interpret as well as how the machine’s decision-making process is carried out. Knowing this information is crucial to staying on top of the policy framework needed to set the boundaries for what the machine shall and shall not do, as well as how these policies need to be expanded, or perhaps restricted, going forward.&lt;br /&gt;&lt;br /&gt;
From a purely existential perspective on one hand, and the need for humans to remain in control of the intelligent machines that are being built on the other, you cannot simply view AI as a black box. (The black box challenge in AI refers to the need to ensure that, when an algorithm takes a decision based on the techniques that have been used to train it, that decision-making process is transparent to humans.) Algorithm transparency is possible when many of the more basic ML techniques — supervised learning, for example — are being used, but so far nobody has found a way to gain transparency when it comes to algorithms based on deep learning techniques. For example, there must be a way to explain why a certain decision was taken when something went wrong. A pertinent example is the self-driving car, where a bunch of algorithms are in play, working together and (hopefully) following policies predefined for how to act in certain circumstances. All works according to plan, but then a totally unknown and unexpected event occurs and the car takes an unexpected action that causes an accident. In such situations, people in general would naturally expect that there would be some way to extract information from the self-driving car on why this specific decision was made — hence, they expect explainability in AI.&lt;br /&gt;&lt;br /&gt;
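The transparency contrast above can be made concrete with a minimal, purely illustrative sketch (the feature names, weights, and scoring rule are hypothetical assumptions, not taken from the text): a linear scoring model is "transparent" in the sense that each decision decomposes into per-feature contributions a human can inspect, which is exactly what deep learning models do not readily offer.

```python
# Illustrative sketch of why a basic supervised model is "transparent":
# a linear score is a sum of per-feature contributions, so every decision
# can be decomposed and explained term by term.
# NOTE: the weights, bias, and feature names are hypothetical examples.

weights = {"income": 0.8, "debt": -1.2, "age": 0.1}
bias = -0.05

def explain(applicant):
    # Contribution of each feature = weight * value; their sum (plus bias)
    # is the raw score, so each term shows how much that feature pushed
    # the decision up or down.
    contributions = {k: weights[k] * applicant[k] for k in weights}
    score = bias + sum(contributions.values())
    decision = "approve" if score >= 0 else "deny"
    return decision, contributions

decision, parts = explain({"income": 1.0, "debt": 0.9, "age": 0.5})
```

Inspecting the returned contributions shows, for instance, how strongly the debt term pulled the score down relative to income: that per-term readout is the kind of answer a customer-facing explanation needs, and it is unavailable when the score comes from an opaque deep network.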
Apart from the technical, ethical, and existential reasons for ensuring the explainability of AI, there is now also a legal reason. The EU’s General Data Protection Regulation (GDPR) has a clause that requests algorithmic interpretability. Right now, these demands aren’t too strict, but over time this will likely change dramatically. The GDPR now requires the ability to explain how an algorithm functions, based on the following questions:&lt;br /&gt;
» Which data is used?&lt;br /&gt;
» Which logic is used in the algorithm?&lt;br /&gt;
» What process is used?&lt;br /&gt;
» What is the impact of the decision made by the algorithm?&lt;br /&gt;
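One way to operationalize the four questions above is to log a structured record for every automated decision. The sketch below is a hypothetical schema for illustration only; the field names and sample values are assumptions, not anything mandated by the GDPR text.

```python
# Hypothetical "decision record" capturing the four questions above for
# each automated decision. The schema is an illustrative assumption, not
# a GDPR-mandated format.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    data_used: list   # which data the algorithm used
    logic: str        # which logic/model was applied (name and version)
    process: str      # what process produced the decision
    impact: str       # the impact of the decision on the data subject

record = DecisionRecord(
    data_used=["income", "debt", "age"],
    logic="linear-scoring-v1",
    process="fully automated, nightly batch scoring",
    impact="loan application denied",
)
```

Keeping such a record per decision gives a concrete artifact to hand to a regulator or a customer, even before the harder problem of explaining the model internals is solved.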
&lt;/div&gt;</summary>
		<author><name>Onnowpurbo</name></author>
	</entry>
</feed>