Transputers Epoch


Nzenwata Uchenna J*., Adegbenle Adedeji A., and Adedokun Adewale J.

Department of Computer Science, Babcock University, Nigeria


Global Journal of Artificial Intelligence

The transputer (Transistor Computer) was an innovative computer design of the 1980s from INMOS, a British semiconductor company based in Bristol. It was conceived as a building block for electronic systems, comprising a processor, memory and a communication system; in effect, the transputer is a complete computer system on a chip. The design was unique in that each processor had a built-in simple operating system, memory and four high-speed (20 Mbit/s, full-duplex) bi-directional serial links. These links allow a transputer to be connected to up to four other transputers, or to peripherals such as video graphics, floppy and hard disc drives, Ethernet networking and standard RS-232 serial ports. This paper discusses the original purpose of the transputer and its architectural and network design. It also lays emphasis on the factors that brought the transputer technology to a dead end, and on the restoration project.
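As a minimal illustration (not taken from the paper), the sketch below uses Go's goroutines and channels, a modern CSP-style model descended from the occam/transputer approach, to mimic two concurrent processes exchanging data over a single point-to-point link. The process names and the one-link topology are assumptions made purely for illustration.

package main

import "fmt"

// producer models a transputer process that emits values over a link (channel).
func producer(link chan<- int) {
	for i := 0; i < 4; i++ {
		link <- i // rendezvous-style send on an unbuffered channel, akin to occam's "link ! i"
	}
	close(link)
}

// consumer models the neighbouring process reading from the same link.
func consumer(link <-chan int, done chan<- struct{}) {
	for v := range link { // akin to occam's "link ? v" performed in a loop
		fmt.Println("received", v)
	}
	done <- struct{}{}
}

func main() {
	link := make(chan int)      // unbuffered: sender and receiver synchronise, as on a transputer link
	done := make(chan struct{})
	go producer(link)           // run both processes concurrently, like occam's PAR construct
	go consumer(link, done)
	<-done
}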


Keywords: Transputer, Occam, INMOS, Floating Point Unit, Memory Management Unit.


How to cite this article:
Nzenwata Uchenna J., Adegbenle Adedeji A., and Adedokun Adewale J. Transputers Epoch. Global Journal of Artificial Intelligence, 2019; 1:5.

