
STATE ECONOMY AND TECHNOLOGY UNIVERSITY OF TRANSPORT

Faculty of Railway Infrastructure and Rolling Stock

 

Department of Automation and Computer-Integrated Transport Technologies

CONTROL ASSIGNMENTS

FOR THE COURSE

THEORY OF AUTOMATIC CONTROL AND ARTIFICIAL INTELLIGENCE

Kyiv - 2015

CONTROL ASSIGNMENTS IN THE THEORY OF AUTOMATIC CONTROL AND ARTIFICIAL INTELLIGENCE

Prepared by: L.F. Marakhovsky, Dr. Sc. (Eng.), Professor

Source: www.trinitas.ru (list of authors)

Nikitin, Andrey Viktorovich

 

1. Никитин А.В. На пути к машинному разуму. Круг третий. (Части 1, 2) // «Академия Тринитаризма», М., Эл № 77-6567, публ. 12887, 31.01.2006.

2. Никитин А.В. На пути к машинному разуму. Круг третий. (Часть 3) // «Академия Тринитаризма», М., Эл № 77-6567, публ. 12907, 03.02.2006.

3. Никитин А.В. На пути к машинному разуму. Круг третий. (Часть 4) // «Академия Тринитаризма», М., Эл № 77-6567, публ. 12914, 06.02.2006.

4. Никитин А.В. На пути к машинному разуму. Круг третий. (Часть 5) // «Академия Тринитаризма», М., Эл № 77-6567, публ. 12928, 08.02.2006.

5. Никитин А.В. Логика автономных систем // «Академия Тринитаризма», М., Эл № 77-6567, публ. 15858, 28.03.2010.

6. Никитин А.В. Логика управления клетки // «Академия Тринитаризма», М., Эл № 77-6567, публ. 17037, 29.11.2011.

7. Никитин А.В. Где Логика…? // «Академия Тринитаризма», М., Эл № 77-6567, публ. 18075, 19.06.2013.

8. Никитин А.В. Где-то на пути к пониманию… // «Академия Тринитаризма», М., Эл № 77-6567, публ. 18092, 07.07.2013.

9. Никитин А.В. Немного о мемристоре… // «Академия Тринитаризма», М., Эл № 77-6567, публ. 19539, 12.09.2014.

10. Никитин А.В. Искусственный нейрон // «Академия Тринитаризма», М., Эл № 77-6567, публ. 20230, 20.02.2015.

11. Никитин А.В. Общая логика. Теория связей // «Академия Тринитаризма», М., Эл № 77-6567, публ. 20544, 04.05.2015.

12. Никитин А.В. Общая логика. Эволюция мышления // «Академия Тринитаризма», М., Эл № 77-6567, публ. 20747, 18.06.2015.

 

Assignment: prepare a report on one of the sources above, as assigned by the instructor, and present it at a seminar of the Department of Automation and Computer-Integrated Transport Technologies (ACITT).

Topic: Artificial intelligence

Structure of the assignment:

Title page

Introduction

Main body

Conclusions

Presentation of the report

The assignment should be 10–15 pages long.

An example of the title page layout is given below.

 


Example of the assignment title page layout

 

 

 
 
STATE ECONOMY AND TECHNOLOGY UNIVERSITY OF TRANSPORT

Faculty of Railway Infrastructure and Rolling Stock

Department of Automation and Computer-Integrated Transport Technologies

CONTROL ASSIGNMENT
in the course "Theory of Automatic Control and Artificial Intelligence"

Topic:

Completed by: 5th-year student, group KISI-5
_________________ (surname, initials)
_________________ (signature)

Checked by: Professor L.F. Marakhovsky, Dr. Sc. (Eng.)
________________ (signature)

"___" _________ 20___

Kyiv - 20__

 

 


Additional sources in the form of articles in Russian and English.

A Digital Neurosynaptic Core Using Event-Driven QDI Circuits

Nabil Imam (1, 2, 3), Filipp Akopyan (2), John Arthur (2), Paul Merolla (2), Rajit Manohar (1), Dharmendra S. Modha (2)

(1) Cornell University, Ithaca, NY
(2) IBM Research - Almaden, San Jose, CA
(3) ni49@cornell.edu

Abstract—We design and implement a key building block of a scalable neuromorphic architecture capable of running spiking neural networks in compact and low-power hardware. Our innovation is a configurable neurosynaptic core that combines 256 integrate-and-fire neurons, 1024 input axons, and 1024×256 synapses in 4.2 mm² of silicon using a 45 nm SOI process. We are able to achieve ultra-low energy consumption 1) at the circuit level, by using an asynchronous design where circuits only switch while performing neural updates; 2) at the core level, by implementing a 256-neuron fanout in a single operation using a crossbar memory; and 3) at the architecture level, by restricting core-to-core communication to spike events, which occur relatively sparsely in time. Our implementation is purely digital, resulting in reliable and deterministic operation that achieves for the first time one-to-one correspondence with a software simulator. At 45 pJ per spike, our core is readily scalable and provides a platform for implementing a wide array of real-time computations. As an example, we demonstrate a sound localization system using coincidence-detecting neurons.

I. INTRODUCTION

Neural systems in biology [1] are capable of an incredible range of real-time computations under metabolic constraints that require them to maintain strict energy efficiency. Tasks such as pattern recognition, sensory reconstruction and motor pattern generation are carried out by these dense, low-power neural circuits much more efficiently than by traditional computers. In these systems, nerve cells called neurons are the basic computational units. Neurons communicate with one another through the generation and modulation of spike trains, where an individual spike is an all-or-nothing pulse. The junction between the output of one neuron and the input of another is called a synapse. The human brain consists of a staggering number of neurons and synapses: over 100 billion neurons and over 1 trillion synapses.

While the simulation of large-scale brain-like networks has become feasible with modern supercomputers [2], the power and space they require prevent them from being useful in mobile systems for real-world tasks. On the other hand, VLSI implementations of these networks—referred to as neuromorphic chips [3]—can approach the area and power efficiency of their biological counterparts and can therefore be used for a wide range of real-time applications involving machine perception and learning.

Traditionally, neuromorphic designs used continuous-time analog circuits to model biological components, and digital asynchronous circuits for spike communication [4]. Analog circuits have been popular in the past, since they are compact and reduce power consumption by directly using the I-V relationship of transistors to mimic the dynamics of neurons. Dense analog circuits, however, are sensitive to fabrication process variations, ambient temperatures and noisy environments, making it difficult to configure circuits that operate reliably under a wide range of external parameters. This limited correspondence between what the software (the neural algorithm) has been configured to do and how the hardware (the analog implementation) functions is an obstacle to algorithm development and deployment, and therefore limits the usefulness of these chips. In addition, the lack of high-density capacitors and increasing sub-threshold currents in the latest fabrication technologies make analog implementations even more difficult and unreliable.

In contrast to the continuous-time operation of analog circuits, the discrete-time operation of digital circuits can also be used to replicate neural activity. In fact, discrete-event simulations are the primary method of study in computational neuroscience research [5]. In this paper we introduce a purely digital implementation of a neuromorphic system. Using low-power event-driven circuits and the latest process nodes, we overcome the problems of analog neuromorphic circuits without sacrificing area and power budgets. The operation of the digital implementation is completely deterministic, producing one-to-one correspondence with software neural simulators, thereby ensuring that any algorithm developed in software will work in hardware despite process variability.

Deterministic operation of brain-like networks can also be achieved on digital commodity chips, namely a DSP, a GPU or an FPGA. However, the parallel and event-driven nature of these networks is not a natural fit for the sequential processing model of conventional computer architectures. A large bandwidth is necessary to communicate spikes between the physically separated processor and memory in these architectures, leading to high power consumption and limited scaling. In contrast, we integrate a crossbar synapse array with our neurons, resulting in a tight locality between memory (the synapses) and computation (the neurons). The asynchronous design methodology that we use fits naturally with the distributed processing of neurons and ensures that the power dissipation of inactive parts of the system is kept to a minimum. Our quasi-delay-insensitive (QDI) [6] implementation leads to extremely robust circuits that remain operational under a wide range of process, voltage and temperature variations, making them ideally suited to mobile, embedded applications.

As our main contribution, we present the design and implementation of a scalable asynchronous neurosynaptic core. In this paper we discuss: (i) the asynchronous circuits that mimic central elements of biological neural systems; (ii) an architecture that integrates computation, communication and memory; (iii) the asynchronous communication infrastructure required to accommodate the architecture; and (iv) the synchronization mechanisms required to maintain a one-to-one correspondence with software (this is the first neuromorphic system to demonstrate such an equivalence). Our prototype chip consists of a single core with 256 digital leaky integrate-and-fire neurons, 1024 inputs, and 1024×256 programmable binary synapses implemented with an SRAM crossbar array. The entire core fits in a 4.2 mm² footprint in IBM's 45 nm SOI process and consumes 45 pJ per spike in active power.

II. ARCHITECTURE AND OPERATION

A. Neurons and Synapses

The computational power of brain-like networks comes from the electrophysiological properties of individual neurons as well as from the synaptic connections that link them into a neural network. Neurons may be modeled at various levels of biophysical detail. The leaky integrate-and-fire model is a standard approximation widely used in computational studies, since it captures the behavior of real neurons in a range of situations and offers an efficient implementation. We use this neuron model as the basic computational unit of our core.
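[Editor's illustration, not code from the paper.] To make the model concrete, a discrete-time leaky integrate-and-fire update can be sketched in a few lines of Python; the function and parameter names (and the default leak and threshold values) are illustrative only:

    def lif_step(v, syn_in, leak=1, theta=20):
        """One discrete timestep of a leaky integrate-and-fire neuron.

        v: integer membrane voltage; syn_in: integer synaptic input.
        Returns (new_voltage, spiked).
        """
        v = v - leak + syn_in      # subtract leakage, integrate synaptic input
        if v > theta:              # voltage exceeds threshold: fire
            return 0, True         # emit a spike and reset the voltage to 0
        return max(v, 0), False    # clip negative voltages back to 0

Because every quantity is an integer and every step is deterministic, a neuron described this way behaves identically in software and in a digital hardware implementation, which is the property underlying the one-to-one correspondence discussed above.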

The neurons in the chip are interconnected through axons and synapses. Each axon may correspond to the output of a neuron in the same core or somewhere else in a large system of many cores. Some axons may also be driven by embedded sensors or by some external driver. The connection between axon j and neuron i is represented as S_ji. Each axon is parameterized by a type G_j that can take one of three different values indicating the type of synapse (e.g. strong excitatory, weak excitatory or inhibitory) that the axon forms with the neurons it connects to. Each neuron is parameterized by a leakage current L, a spike threshold θ and three synapse weights W_0, W_1, W_2 that correspond to the different axon types. All these parameters are configurable during start-up.
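[Editor's illustration.] For the prototype's dimensions (1024 axons, 256 neurons), this parameterization could be held in software as follows; a sketch with illustrative names, assuming NumPy, not the paper's own data structures:

    import numpy as np

    K, N = 1024, 256                          # axons and neurons, as in the prototype core
    S = np.zeros((K, N), dtype=bool)          # binary crossbar: S[j, i] connects axon j to neuron i
    G = np.zeros(K, dtype=np.int64)           # axon type G_j, one of {0, 1, 2}
    W = np.zeros((N, 3), dtype=np.int64)      # per-neuron weights W_0, W_1, W_2, indexed by axon type
    leak = np.ones(N, dtype=np.int64)         # leakage current L for each neuron
    theta = np.full(N, 100, dtype=np.int64)   # spike threshold θ for each neuron (placeholder value)

All six arrays hold integers and are fixed before simulation begins, mirroring the statement that every constant is configurable at start-up.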

The core implements a discrete-event simulation in which the neuron states are updated at each timestep according to external input and interconnectivity. The state of neuron i at some time t is represented by its voltage V_i[t], while the state of axon j is represented by its activity bit A_j[t].

Fig. 1. Top: Architecture of the neurosynaptic core with K axons and N neurons. Each junction in the crossbar represents a synapse between an axon (row) and a dendrite (column). Each neuron has a dedicated column in the crossbar. Active synapses are represented by an open circle in the diagram. An example sequence of events in the core is illustrated. The scheduler accepts an incoming address event and communicates with the axon token-ring. The token-ring activates axon 3 (A_3) by asserting the third wordline of the SRAM crossbar array. As a result, a synaptic event of type G_3 is delivered to neurons N_1, N_3 and N_M. The AER transmitter sends out the addresses of these neurons if they consequently spike. Bottom: State variables and parameters of the system. All values are represented as integers, and all constants are configurable at start-up.

The parameters and state variables of the system are tabulated in Fig. 1 (bottom). Neuron i receives the following input from axon j:

    A_j[t] · S_ji · W^i_{G_j},

where W^i_{G_j} is neuron i's synapse weight for axon type G_j.

The neuron's voltage is updated at each time step by subtracting a leakage from its voltage and integrating the synaptic input from all the axons:

    V_i[t+1] = V_i[t] − L + Σ_j A_j[t] · S_ji · W^i_{G_j}

When V_i[t] exceeds the threshold θ, the neuron produces a spike (represented by a digital '1' in its output) and its voltage is reset to 0. We also enforce that negative voltages are clipped back to 0 at the end of each time step (to replicate a reversal potential of 0).
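[Editor's illustration.] Assembling the pieces, one full timestep of a software reference simulator might look as follows, under the same illustrative configuration as the sketch above. Since every operation is integer and deterministic, such a simulator can in principle match the hardware spike-for-spike, which is the kind of one-to-one correspondence the paper demonstrates:

    import numpy as np

    def core_step(V, A, S, G, W, leak, theta):
        """One discrete timestep of the neurosynaptic core (reference sketch).

        V: (N,) int voltages; A: (K,) 0/1 axon activity bits;
        S: (K, N) binary crossbar; G: (K,) axon types in {0, 1, 2};
        W: (N, 3) per-neuron weights by axon type; leak, theta: (N,) ints.
        Returns (V_next, spikes).
        """
        weights = W[:, G].T                           # weights[j, i] = W[i, G[j]], shape (K, N)
        syn = (A[:, None] * S * weights).sum(axis=0)  # Σ_j A_j[t] · S_ji · W^i_{G_j}, per neuron
        V = V - leak + syn                            # subtract leakage, integrate synaptic input
        spikes = V > theta                            # neurons whose voltage exceeds θ fire
        V = np.where(spikes, 0, V)                    # reset spiking neurons to 0
        return np.maximum(V, 0), spikes               # clip negative voltages back to 0

As a usage example, delivering a single address event on axon 3 (cf. the sequence in Fig. 1) amounts to setting A[3] = 1 for one call to core_step with the arrays from the configuration sketch above; any neuron i with S[3, i] set then receives a synaptic event of type G_3.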