PG TEST

[https://books.pragmagenesis.net/index.php?title=PG_TEST&action=purge PB_TEST purged]

<div class="pg-observed-action"></div>

<div class="pg-periodic-system-of-actions">
<div class="pg-observed-action">vv</div>
<div class="pg-periodic-system-column"> {{PG_PHYSIOSPHERE_LOWER_HIERARCHY}} </div>
<div class="pg-periodic-system-column"> {{PG_PHYSIOSPHERE_MIDDLE_HIERARCHY}} </div>

[[PG_TEST2]]
[[PG_TEST3]]
<table style="border: 1px solid black">
  <tr>
    <th>Month</th>
    <th>Savings</th>
  </tr>
  <tr>
    <td>January</td>
    <td>$100</td>
  </tr>
</table>
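
For comparison, the same two-column table can also be written in MediaWiki's native table markup rather than raw HTML; a minimal sketch, assuming the standard "wikitable" styling class is available on this wiki:

{| class="wikitable"
! Month !! Savings
|-
| January || $100
|}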


[[:Category:PG_CAT3]]

Latest revision as of 20:42, 20 February 2023

Kant assumes that we do not see things as they actually are "in themselves". Things in themselves exist independently of human cognition and are therefore not an object of experience: "We can only know appearances, not things in themselves". PG_TEST3#Third Section

PB_TEST purged


Template:PG_CYTOSPHERE_MIDDLE_HIERARCHY

Template:PG_CYTOSPHERE_LOWER_HIERARCHY

Template:PG_PHYSIOSPHERE_MIDDLE_HIERARCHY

Template:PG_PHYSIOSPHERE_LOWER_HIERARCHY

Template:HOLOSPHERE_LOWER_HIERARCHY

Template:PGPS_Cell

PG_TEST2 PG_TEST3


Month Savings
January $100

Category:PG_CAT3

Page name is 'PG TEST'

Go to subpage PG TEST/subPage1

PG TEST/and Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It

PG TEST/and Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It/de

First Section

Caption text
Header text Header text Header text
Example PG_ATPOM Tttt
Example Tttt4 Example
Example || Example



Parameter1 Note = 'This is a note.' category1 = 'cat_ok' condition1 = 'yes'
You entered text in variable category1

Button text


Start

Toggle all Spoiler
Troll div in middle
Spoiler 1

¡HI! I am a spoiler

Spoiler 2

¡HI! I am a spoiler


Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It

J. Mark Bishop

Frontiers in Psychology 11 (2021)

Abstract

Artificial Neural Networks have reached “grandmaster” and even “super-human” performance across a variety of games, from those involving perfect information, such as Go, to those involving imperfect information, such as “Starcraft”. Such technological developments from artificial intelligence (AI) labs have ushered concomitant applications across the world of business, where an “AI” brand-tag is quickly becoming ubiquitous. A corollary of such widespread commercial deployment is that when AI gets things wrong—an autonomous vehicle crashes, a chatbot exhibits “racist” behavior, automated credit-scoring processes “discriminate” on gender, etc.—there are often significant financial, legal, and brand consequences, and the incident becomes major news. As Judea Pearl sees it, the underlying reason for such mistakes is that “... all the impressive achievements of deep learning amount to just curve fitting.” The key, as Pearl suggests, is to replace “reasoning by association” with “causal reasoning”—the ability to infer causes from observed phenomena. It is a point that was echoed by Gary Marcus and Ernest Davis in a recent piece for the New York Times: “we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets—often using an approach known as ‘Deep Learning’—and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space, and causality.” In this paper, foregrounding what in 1949 Gilbert Ryle termed “a category mistake”, I will offer an alternative explanation for AI errors; it is not so much that AI machinery cannot “grasp” causality, but that AI machinery (qua computation) cannot understand anything at all.
