Finally, each kernel waits for another token, presumably a work token, to circulate through the control channel, which will start the new active object (Figure 3).


Figure 4 shows a runtime-reconfigurable work farm. The Boss object has a set of precompiled reconfiguration packets stored in its local memory. In normal operation, the Boss sends work packets to the selected worker objects in the work farm. New reconfiguration packets may be sent to the work farm at any time. When a Dynamic Object (DO) completes its reconfiguration, it sends an ID back to the Boss to indicate the completion of reconfiguration.

Upon receipt of the ID, the Boss resumes distributing work packets to the newly reconfigured DO until its configuration is changed again. Table 1 shows the time it takes to reconfigure objects. The Configuration column lists the total reconfiguration time in microseconds, from receipt of the first reconfiguration packet to the time the Boss receives the ID indicating completion of the reconfiguration process.
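The Boss/DO handshake described above can be sketched in a few lines. This is a minimal illustration using Python threads and queues, not Ambric's actual programming model; all names (`dynamic_object`, the packet kinds, the `"fft"` configuration) are hypothetical:

```python
from queue import Queue
from threading import Thread

def dynamic_object(do_id, inbox, ack):
    # Hypothetical DO: processes work packets with its current configuration,
    # and on a reconfiguration packet loads it and reports its ID to the Boss.
    config = "fir_filter"          # assumed initial configuration
    while True:
        kind, payload = inbox.get()
        if kind == "work":
            pass                   # process payload with current config
        elif kind == "reconfig":
            config = payload       # load the precompiled packet
            ack.put(do_id)         # report completion back to the Boss
        elif kind == "stop":
            return

inbox, ack = Queue(), Queue()
worker = Thread(target=dynamic_object, args=(1, inbox, ack))
worker.start()

inbox.put(("work", "frame-0"))
inbox.put(("reconfig", "fft"))     # send a new configuration packet
done_id = ack.get()                # Boss blocks until the DO's ID arrives
inbox.put(("work", "frame-1"))     # only now resume sending work packets
inbox.put(("stop", None))
worker.join()
print("reconfigured DO:", done_id)
```

The key point of the protocol is the blocking `ack.get()`: the Boss does not send work to a DO between the reconfiguration packet and the returned ID.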

By using a Massively Parallel Processor Array and its straightforward programming model, engineers can develop very high-performance embedded military systems with full reconfigurability and runtime reliability.

## Real-Time Embedded Systems Laboratory

The MPPA platform is scalable for a long lifetime of investment in software methodology. Mike Butts, an Ambric Fellow, has an extensive background in computer architecture, especially large-scale reconfigurable hardware, and is the co-inventor of hardware logic emulation using reconfigurable hardware. He has developed several processor architectures, reconfigurable chips, and systems at Mentor Graphics, Quickturn, Synopsys, Cadence, Tabula (which he cofounded), and Ambric. Mike holds 38 U.S. patents.

Paul Chen is Director of Strategic Business Development at Ambric and has extensive knowledge of sensor and image processing, graphics, computer system architecture, and real-time operating systems.

Paul has published several papers on medical imaging, radar processing, embedded X Window Systems, and real-time UNIX operating systems on multiprocessor platforms. The performance requirements of high-performance embedded military systems are outstripping the capabilities of ordinary CPU and DSP processors.

## Multicore Technology: Architecture, Reconfiguration, and Modeling

These embedded systems are also required to become increasingly flexible, multimodal, and even dynamically reconfigurable in field operation. Reliable system development and operation remains essential. A new architecture, the Massively Parallel Processor Array (MPPA), has been developed specifically to meet the challenges of designing these embedded systems. Among existing platforms for reconfigurable embedded computing, hardware implementations with ASIC chips cannot be reconfigured at all. As for multi-core DSPs, multi-core CPUs with a shared memory architecture have become common in general-purpose computing, and some DSP vendors have begun adopting this architecture for embedded systems.

Regarding FPGAs and reconfiguration, FPGA architecture was never conceived to support runtime reconfiguration, which is why it has remained complex, limited, and relatively slow, despite some recent improvements.

A modified PSO-based technique built on a multi-core sequential architecture is presented in the paper. The processing cores implementing the sequential architecture were connected via a NoC to implement a parallel architecture, which was benchmarked against a pure software-based implementation and showed an average speed-up. Li et al. propose a hardware architecture for fuzzy c-means (FCM) image segmentation.


In this segmentation architecture, spatial information is used during the FCM training process. In addition, the architecture employs a high-throughput pipeline to enhance computation speed. Experimental results revealed that the proposed architecture, implemented on a SoPC platform, attains a significant speedup, making it an effective alternative for applications requiring real-time image segmentation and analysis.

Existing PAM algorithms and their respective architectures are not very scalable and have an upper bound on the reduction in computational complexity they can achieve.

In the proposed system, we present a scalable parallel architecture for the PAM (Partitioning Around Medoids) algorithm which can exponentially reduce its computational complexity. It introduces a collaborative approach: the data is divided among multiple processing units which perform homogeneous operations independently and finally produce a combined result. We will use the following notation to formally describe the PAM algorithm. In each iteration of the algorithm, a pair consisting of a medoid object m_j and a non-medoid object x_i is selected which produces the best clustering when their roles are swapped.

The objective function used is the sum of the distances from each object to its closest medoid:

F(M) = Σ_{i=1}^{N} min_{m_j ∈ M} d(x_i, m_j)
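As a minimal illustration of this objective, the following sketch computes it directly; the helper names (`euclidean`, `pam_cost`) and the Euclidean distance are our choices, not fixed by the text:

```python
# Sum over all objects of the distance to the nearest medoid in M.

def euclidean(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def pam_cost(X, M):
    """PAM objective: total distance from each object to its closest medoid."""
    return sum(min(euclidean(x, m) for m in M) for x in X)

X = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0), (11.0, 0.0)]
M = [(0.0, 0.0), (10.0, 0.0)]
print(pam_cost(X, M))  # 2.0: each non-medoid lies 1.0 from its medoid
```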


In the first phase, called the Build Phase, an initial clustering is obtained by the successive selection of K medoids. The first medoid is the one for which the sum of distances to all non-medoid objects is minimum; this is the most centrally located data point in the set X. Subsequently, at each step another object is selected as a medoid, the one for which the objective function is minimum.

The process continues until K medoids have been found. In the second phase, called the Swap Phase, the algorithm attempts to improve the set M of medoids, and therefore the clustering obtained from this set. The algorithm goes through each pair of objects (m_j, x_h), where m_j is a medoid and x_h is a non-medoid object belonging to cluster j, and determines the effect on the objective function when a swap is carried out, i.e., when the roles of m_j and x_h are exchanged. For each cluster j, the object x_h that minimizes the objective function is selected as its new medoid, and the set M is updated accordingly.

This process is iterated until no further decrease in the objective function value is possible, or in other words, until there is no update to the set M between two consecutive iterations. The aim of this paper is to propose a model that makes the PAM algorithm computationally less expensive by parallelizing its functionality so that it uses fewer resources when implemented on reconfigurable hardware. The central question of our research is how well we can parallelize the PAM algorithm so that its overall computational complexity is significantly reduced.

We concluded that this task can be performed well by dividing the algorithm into subtasks and running those subtasks for equal subsets of data simultaneously on multiple homogeneous cores. The complete flow chart of the Partitioning Around Medoids algorithm in these terms is shown in Figure 1. In the first subtask, each object in turn is temporarily selected as a medoid and the minimum value of the objective function is computed for the set of medoids selected up to the current step.

The second subtask selects as the actual medoid the temporary medoid for which the objective function value is minimum. Algorithm 2 depicts the pseudo-code of the Build Phase. In the Swap Phase, the first subtask assigns each object to its closest medoid to form clusters.
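The Build Phase described above (subtask 1 evaluating candidates, subtask 2 keeping the best) can be sketched as follows. This is an illustrative re-implementation, not the paper's Algorithm 2; helper names are ours:

```python
# Greedy Build Phase: select K medoids, each time choosing the object
# that minimizes the PAM objective given the medoids chosen so far.

def dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def cost(X, M):
    return sum(min(dist(x, m) for m in M) for x in X)

def build_phase(X, K):
    M = []
    for _ in range(K):
        # Subtask 1: try each remaining object as a temporary medoid.
        # Subtask 2: keep the candidate with the minimum objective value.
        best = min((x for x in X if x not in M),
                   key=lambda x: cost(X, M + [x]))
        M.append(best)
    return M

X = [(0, 0), (1, 0), (10, 0), (11, 0)]
print(build_phase(X, 2))  # [(1, 0), (10, 0)]: one central point per group
```

Note that the first iteration, with M empty, reduces to picking the most centrally located point, exactly as described for the first medoid.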


In the second subtask, the role of each object within a cluster is swapped in turn with its medoid, and the smallest value of the cost function is computed. The last subtask of this phase updates the medoid of the current cluster to the object for which the cost function value is minimum. Subtasks 2 and 3 are repeated for all clusters. The corresponding pseudo-code is shown as Algorithm 3.
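A compact sketch of this Swap Phase, under the same illustrative conventions as before (Euclidean distance, our own helper names, not the paper's Algorithm 3):

```python
# Swap Phase: for each cluster, try swapping the medoid with each member;
# accept a swap when it lowers the objective, and iterate until M is stable.

def dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def cost(X, M):
    return sum(min(dist(x, m) for m in M) for x in X)

def swap_phase(X, M):
    M = list(M)
    improved = True
    while improved:                      # stop when M no longer changes
        improved = False
        for j, m_j in enumerate(M):
            # Cluster j: non-medoids whose closest medoid is m_j.
            cluster = [x for x in X if x not in M
                       and min(M, key=lambda m: dist(x, m)) == m_j]
            for x_h in cluster:
                trial = M[:j] + [x_h] + M[j + 1:]
                if cost(X, trial) < cost(X, M):   # roles of m_j, x_h swapped
                    M, improved = trial, True
                    break
    return M

X = [(0, 0), (1, 0), (2, 0), (10, 0)]
print(swap_phase(X, [(0, 0), (10, 0)]))  # [(1, 0), (10, 0)]
```

Starting from the poor medoid (0, 0), the swap with the central point (1, 0) lowers the objective from 3.0 to 2.0, and no further swap improves it.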

Now, in order to reduce the computational complexity and improve the execution time, the PAM algorithm needs parallelism. We have identified from Algorithm 2 that subtask 1 of the Build Phase can be executed in parallel on multiple PEs for equal subsets of data, while subtask 2 computes the final result of this phase. Similarly, it is clear from Algorithm 3 that subtasks 1 and 2 of the Swap Phase parallelize well. Our proposed architecture uses P homogeneous cores, or Processing Elements (PEs), connected through an interconnect network such as a bus, as shown in Figure 2.
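The parallelization of subtask 1 can be modeled as a map-reduce step: each PE evaluates candidates from its own slice, and subtask 2 reduces the P partial winners. The sketch below simulates the P PEs sequentially; the function names and chunking scheme are our illustration, not the hardware design:

```python
# One parallel Build-Phase step: P PEs each evaluate candidate medoids
# from an equal slice of the remaining objects (subtask 1), then the
# P partial results are reduced to a single winner (subtask 2).

def dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def cost(X, M):
    return sum(min(dist(x, m) for m in M) for x in X)

def pe_best_candidate(X, M, candidates):
    """Subtask 1 on one PE: best candidate within this PE's subset."""
    return min(candidates, key=lambda x: cost(X, M + [x]))

def parallel_build_step(X, M, P):
    pool = [x for x in X if x not in M]
    step = -(-len(pool) // P)                   # ceil(len(pool) / P)
    chunks = [pool[i:i + step] for i in range(0, len(pool), step)]
    partial = [pe_best_candidate(X, M, c) for c in chunks]  # one per PE
    return min(partial, key=lambda x: cost(X, M + [x]))     # subtask 2

X = [(0, 0), (1, 0), (10, 0), (11, 0)]
print(parallel_build_step(X, [], P=2))  # (1, 0)
```

Each PE's work is independent, so the P calls to `pe_best_candidate` can run concurrently; only the final reduction needs communication.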

Each PE is given access to all data points, so it can work in parallel with the other PEs to achieve faster convergence and, eventually, increased throughput.

A Global Control Unit is used to control the overall flow of the algorithm. The interconnect network used in the design can be a bus-based, point-to-point, or network-on-chip-based interface; the choice of interconnect is based on the complexity and requirements of the application. The overall working of the parallel PAM for this multi-core processor model is described in the following steps. First, a data set of size N is made completely divisible by the number of available cores P by appending zeros at the end of the data set, so that equal subsets can be assigned to each core.
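The padding-and-partitioning step above amounts to the following few lines (function name and zero-vector padding convention are our illustration):

```python
# Pad the data set with zero vectors so N is a multiple of P, then
# assign an equal slice of N/P points to each of the P cores.

def pad_and_split(X, P, dim=2):
    pad = (-len(X)) % P                  # zeros needed to reach a multiple of P
    X = X + [(0,) * dim] * pad
    n = len(X) // P                      # equal slice size per PE
    return [X[i * n:(i + 1) * n] for i in range(P)]

X = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]
slices = pad_and_split(X, P=3)
print(slices)  # three slices of two points each; the last is zero-padded
```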

This medoid-selection step is repeated until K medoids are initialized, as described in Algorithm 4 below. The final results of the Build Phase are sent to all PEs so that they can proceed to the next phase of the algorithm. Each PE then tags all of its assigned data objects with their closest cluster numbers.

These tags are stored in local memory, associated with each data object. The PEs broadcast, one by one, their N/P tags over the interconnect network so that each PE has the complete clustering result. Steps 5 and 6 are repeated until no update to any of the K medoids is reported.
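The tagging and broadcast steps can be sketched as follows; the broadcast over the interconnect is modeled simply as concatenating each PE's tag list (names are illustrative):

```python
# Each PE tags its own N/P slice with the index of its closest medoid
# (step 5); the tags are then "broadcast" so every PE holds the complete
# clustering result (step 6) -- here, just concatenated in PE order.

def dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def tag_slice(points, M):
    """Cluster index of the closest medoid, for each point in this slice."""
    return [min(range(len(M)), key=lambda j: dist(p, M[j])) for p in points]

M = [(0, 0), (10, 0)]
slices = [[(1, 0), (2, 0)], [(9, 0), (12, 0)]]   # two PEs, N/P = 2 each
all_tags = sum((tag_slice(s, M) for s in slices), [])
print(all_tags)  # [0, 0, 1, 1]
```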


The complete work flow of the parallel PAM algorithm is depicted in Figure 3 below. At this stage, we can explore the internal structure of a processing element at an abstract level. The Controller section consists of a local control unit that manages the overall sequencing of subtasks within a processing element and the communication with other PEs.