Wednesday, 6 June 2012

EXAMPLES OF APPLIED COMPUTING


Applications of Computing
1. Facebook, Twitter, Yahoo Messenger, Gmail, and similar accounts run on cloud computing technology

According to NIST (the National Institute of Standards and Technology), cloud computing must satisfy the following five criteria:

On-demand self-service – anyone can sign up on their own, without anyone's help, and use the service as needed
Broad network access – cloud services can be accessed from anywhere, with any device
Resource pooling – all computing resources are pooled and used together
Rapid elasticity – capacity is served elastically, depending on demand at any given moment
Measured service – every service can be metered (and billed) according to actual usage

The great strength of cloud computing is its very high efficiency, especially when many large, geographically distributed data centers are used (up to hundreds of thousands of servers per data center).
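The "measured service" criterion is the easiest one to make concrete in code. The sketch below is a toy usage meter, with invented resource names and rates, that bills a month of consumption by actual usage:

```python
# Toy illustration of NIST's "measured service" characteristic: usage is
# metered per resource and billed on actual consumption. The resource names
# and rates below are made up for illustration, not a real provider's prices.

RATES = {"cpu_hours": 0.05, "gb_storage": 0.02, "gb_transfer": 0.09}

def monthly_bill(usage):
    """Return the total charge for one month of metered usage."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

usage = {"cpu_hours": 720, "gb_storage": 100, "gb_transfer": 50}
print(round(monthly_bill(usage), 2))  # 720*0.05 + 100*0.02 + 50*0.09 = 42.5
```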

2. Huawei runs virtual desktops on cloud computing technology

“Huawei is committed to providing the world's best applications and infrastructure solutions to drive the use of cloud-based applications and services. Having all 45,000 of our engineers work on virtual desktops is proof of our readiness to deliver this technology to customers in every industry, in Indonesia and in other countries,” said Li Wenzhi, CEO of Huawei Indonesia.

To make the most of the scalability and flexibility that cloud computing offers, Huawei first deployed the technology at its research and development center in Shanghai in 2009; today, cloud computing is used by more than 45,000 Huawei engineers worldwide.

Compared with conventional desktop computing, the cloud approach is estimated to cut investment by about 30 percent relative to a traditional business deployment, reduce electricity consumption by 73 percent, and raise CPU utilisation from 5 percent to 60 percent. Deployment time also shrinks dramatically, from three months to just one week.
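Taken at face value, the article's figures imply the following rough arithmetic (the baselines are arbitrary index values, not real prices):

```python
# The figures below are straight from the article: desktop cloud computing is
# said to cut investment by 30 percent, cut power consumption by 73 percent,
# raise CPU utilisation from 5 to 60 percent, and shrink deployment from
# three months to one week. The baselines are arbitrary index values.
baseline_cost, baseline_power = 100.0, 100.0

cost_after  = baseline_cost * (1 - 0.30)    # ~70, i.e. 30% saved
power_after = baseline_power * (1 - 0.73)   # ~27, i.e. 73% saved
cpu_gain    = 60 / 5                        # utilisation improves 12-fold
deploy_gain = (3 * 30) / 7                  # ~13x faster rollout, in days

print(round(cost_after, 1), round(power_after, 1), cpu_gain, round(deploy_gain, 1))
```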

Engineers use a thin client to reach their virtual desktops easily at any time, which raises their working efficiency. Thanks to cloud computing, data is no longer stored on each individual computer but on servers in the cloud data center.

3. Computing in Medicine
One example of applied computing in medicine is drug discovery. To predict the activity of a large number of drug candidates, a computational practitioner mimics the wet-lab activity assay using chemical models (such as the three-dimensional structure of each candidate) in place of the laboratory materials. These models are expressed as mathematical equations, which a computer then solves with a capacity and speed far beyond a human's. The result is a single number for each candidate that can be compared across candidates; the comparison is a prediction of each candidate's activity relative to the others. This is how candidate activity is predicted computationally, and it allows candidates predicted to have low activity to be set aside early.
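The screening loop described above can be sketched as follows; the scoring function, weights, and candidate data are invented placeholders, not a real chemistry model:

```python
# Hypothetical sketch of the virtual-screening idea: each drug candidate gets
# one computed score, candidates are ranked by it, and low-scoring ones are
# discarded before wet-lab testing. Data and weights are made up.

def predicted_activity(descriptors):
    """Stand-in for solving the model equations; returns one number."""
    weights = [0.6, 0.3, 0.1]          # made-up weights over 3-D descriptors
    return sum(w * d for w, d in zip(weights, descriptors))

candidates = {
    "candidate_A": [0.9, 0.4, 0.7],
    "candidate_B": [0.2, 0.1, 0.3],
    "candidate_C": [0.7, 0.8, 0.5],
}

scores = {name: predicted_activity(d) for name, d in candidates.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
shortlist = [name for name in ranked if scores[name] >= 0.5]
print(shortlist)  # low-activity candidate_B is filtered out
```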

COMPUTING EXAMPLES 2


Types of Modern Computing

Modern computing falls into three broad types: mobile computing, grid computing, and cloud computing. They are described in more detail below:

1. Mobile computing
Mobile computing has several definitions; one of them is that it is the advance in computer technology that allows communication over a network without cables, on devices that are easy to carry from place to place. It is related to, but distinct from, wireless computing. Examples of mobile computing devices include GPS units; smartphones are a typical class of mobile computing device.

Some limitations and risks of mobile computing:

- Limited bandwidth
Internet access on these devices is generally slower than a wired connection.

- Transmission interference
Many factors can disrupt the signal in mobile computing, such as weather, terrain, and the distance between the device and the nearest transmitter.


2. Grid computing
Grid computing uses geographically separated computers, distributed and connected by a network, to solve large-scale computational problems. A common checklist for recognising a grid computing system is:

- Computing resources are coordinated without central control.
- The system uses open standards and protocols.
- The system aims to deliver a quality of service beyond that of its individual components.
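The core pattern of grid computing, one large job split into independent pieces that pooled resources work on in parallel, can be sketched locally; real grid middleware adds scheduling, security, and the open protocols mentioned above:

```python
# Minimal sketch of the grid idea: split a large computation into independent
# chunks, farm them out to workers, and combine the partial results. The
# threads below are local stand-ins for geographically separate nodes.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """One worker's share of the job: sum of squares over a half-open range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

chunks = [(0, 250_000), (250_000, 500_000),
          (500_000, 750_000), (750_000, 1_000_000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = list(pool.map(partial_sum, chunks))
total = sum(parts)
print(total == sum(i * i for i in range(1_000_000)))  # True
```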

3. Cloud computing
Cloud computing is a style of computing in which dynamically scalable and often virtualised resources are provided as a service over the internet. The term describes a new model for supplying, consuming, and delivering IT services via the internet, typically involving dynamically scalable and frequently virtualised resources.

The differences between mobile, grid, and cloud computing are as follows:

- Mobile computing uses portable devices that work like mobile phones, while grid and cloud computing use ordinary computers.

- Mobile computing power is more expensive than grid or cloud computing power.

- Mobile computing needs no fixed location and is easy to carry anywhere, while grid and cloud computing require dedicated facilities.

- In mobile computing, processing depends on the user; in grid computing it depends on whether the user can reach a server; and in cloud computing it requires an internet connection as the link.

COMPUTING EXAMPLES

Examples of modern computing up to the birth of ENIAC:

*Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first machine to feature binary arithmetic, including floating-point arithmetic, and a measure of programmability. In 1998, the Z3, the world's first operational computer, was shown to be Turing-complete in principle.

*Next came the non-programmable Atanasoff-Berry Computer (1941), which based its computation on vacuum tubes, binary numbers, and regenerative capacitor memory. The use of regenerative memory allowed it to be far more compact (about the size of a large desk or workbench).

*The Colossus computer followed in 1943: although its programmability was limited, it demonstrated that a device using thousands of vacuum tubes could be reasonably reliable and electronically reprogrammable. It was used to break German wartime codes.
*The Harvard Mark I (1944) was a large-scale electromechanical computer with limited programmability.

*Then came the US Army's Ballistic Research Laboratory's ENIAC (1946), which computed with decimal arithmetic and is usually called the first general-purpose electronic computer (ENIAC was a far more advanced generation than the first computer of its kind, Konrad Zuse's Z3 of 1941).


Applications of Computational Modelling


MEALY MACHINE

In the theory of computation, which underpins the concept of a computer, a Mealy machine is a finite state automaton (or finite state transducer) whose output depends on both the current state and the current input. Accordingly, the state diagram of a Mealy machine labels each transition with an input signal and an output signal. This distinguishes it from a Moore machine, which produces an output only per state.
The name comes from G. H. Mealy, a state-machine pioneer who wrote "A Method for Synthesizing Sequential Circuits" in 1955.
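A minimal Mealy machine can be written down as a transition table keyed on (state, input). The example below, an edge detector that outputs 1 whenever the input bit changes, is illustrative and not taken from the original text:

```python
# A small Mealy machine: the output is a function of the current state AND
# the current input, emitted on each transition. This one detects "edges" in
# a bit string, outputting 1 whenever the bit differs from the previous one.

# (state, input) -> (next_state, output)
TRANSITIONS = {
    ("saw0", "0"): ("saw0", "0"),
    ("saw0", "1"): ("saw1", "1"),
    ("saw1", "0"): ("saw0", "1"),
    ("saw1", "1"): ("saw1", "0"),
}

def run_mealy(bits, start="saw0"):
    state, out = start, []
    for b in bits:
        state, o = TRANSITIONS[(state, b)]
        out.append(o)
    return "".join(out)

print(run_mealy("0011010"))  # "0010111"
```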

MOORE MACHINE

In the theory of computation, a Moore machine is a finite state automaton whose output is determined solely by the current state (and not by the input). The state diagram of a Moore machine attaches an output signal to each state. This differs from a Mealy machine, which has an output for each transition.
The name comes from Edward F. Moore, a computer scientist and state-machine pioneer who wrote "Gedanken-experiments on Sequential Machines".
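By contrast, a Moore machine attaches its output to states rather than transitions. The parity tracker below (an invented example) emits one output symbol per state visited, including the start state:

```python
# A small Moore machine: the output is attached to the state itself, not to
# the transition. This one tracks the parity of 1-bits seen so far; each
# state's output is its parity label.

OUTPUT = {"even": "E", "odd": "O"}           # one output per state
NEXT = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def run_moore(bits, start="even"):
    state, out = start, [OUTPUT[start]]      # Moore machines emit in the start state too
    for b in bits:
        state = NEXT[(state, b)]
        out.append(OUTPUT[state])
    return "".join(out)

print(run_moore("1101"))  # "EOEEO"
```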
Petri net
A Petri net is a model for representing discrete distributed systems. As a model, a Petri net is a directed bipartite graph consisting of places, transitions, and the arrows connecting them. To represent the state of the system, tokens are placed on particular places. When a transition fires, tokens move along the arrows.
Petri nets were first proposed by Carl Adam Petri in 1962.
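The place/transition/token mechanics can be captured in a few lines. The producer-consumer net below is an invented example; a transition fires only when every one of its input places holds enough tokens:

```python
# A tiny Petri net interpreter: places hold tokens; a transition is enabled
# when each input place has enough tokens, and firing it moves tokens along
# the arrows. The producer/consumer net below is illustrative only.

marking = {"ready": 1, "buffer": 0, "done": 0}   # tokens per place

TRANSITIONS = {
    "produce": ({"ready": 1}, {"buffer": 1}),     # (consumes, produces)
    "consume": ({"buffer": 1}, {"done": 1}),
}

def enabled(name):
    consumes, _ = TRANSITIONS[name]
    return all(marking[p] >= n for p, n in consumes.items())

def fire(name):
    assert enabled(name), f"{name} is not enabled"
    consumes, produces = TRANSITIONS[name]
    for p, n in consumes.items():
        marking[p] -= n
    for p, n in produces.items():
        marking[p] += n

fire("produce")
fire("consume")
print(marking)  # {'ready': 0, 'buffer': 0, 'done': 1}
```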
Some applications of computational models in practice include:

1. A PROACTIVE INTELLIGENT COMPUTATIONAL MODEL FOR MONITORING INFORMATION TECHNOLOGY PROJECTS USING AN AUTONOMOUS MULTI-AGENT SYSTEM

Proactive intelligent computation is an important branch of artificial intelligence that can be applied to dynamic, distributed problems, including automating project-management activities within a company, for example detecting automatically and in real time whether project execution matches or deviates from the agreed schedule.
The paper studies, designs, and evaluates a proactive computational model for monitoring project execution based on intelligent agents. The Prometheus methodology was used to build the prototype, and the code was written with the Jadex agent framework. In testing, the autonomous intelligent agents proved able to proactively search for and present project-monitoring information at any time, against several simulated samples of IT project data.
2. A Computational Model of the Equivalent Circuit of a Microstrip Transmission Line in Matlab
A transmission line is a medium for carrying a signal or wave from a source to a receiver. It can be modelled as an electrical circuit, an equivalent circuit, through which the electrical wave propagates. This final-year project presents a computational model of the equivalent circuit of a microstrip line: the line is modelled as an equivalent circuit and computed in Matlab to obtain a model of wave propagation along the microstrip. The computation showed that a wave propagating on a microstrip line with dielectric thickness H = 0.76 mm, characteristic impedance Z0 = 141.1855 Ω, phase constant β = 91.182 rad/m, and attenuation constant α = 0.3712 Np/m suffers less signal degradation than with other dielectric thicknesses, as seen from its nearly flat wave envelope.
3. An FSA Computational Model as Decision-Maker in Recognising Indonesian Syllables
This work discusses how to recognise the syllables in Indonesian sentences using a finite state automaton, a model of a recognising machine capable of recognising the class of languages known as the regular languages.
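As an illustration of the idea (not the paper's actual automaton), here is a small FSA that accepts the (C)V(C) pattern, a regular language covering many Indonesian syllables such as "a", "ka", and "kan":

```python
# An illustrative finite state automaton for the (C)V(C) syllable pattern.
# This is a minimal stand-in, not the automaton from the cited work.

VOWELS = "aeiou"

def is_syllable(s):
    """Accept strings of the regular form consonant? vowel consonant?."""
    state = "start"
    for ch in s.lower():
        if state == "start":
            state = "v" if ch in VOWELS else "c"
        elif state == "c":                # need a vowel next
            if ch not in VOWELS:
                return False
            state = "v"
        elif state == "v":                # at most one closing consonant
            if ch in VOWELS:
                return False
            state = "end"
        else:                             # already consumed the final consonant
            return False
    return state in ("v", "end")          # accepting states

print([s for s in ["a", "ka", "kan", "kka", "kaan"] if is_syllable(s)])
```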

DIVISION OF COMPUTATIONAL MODELS


Computational models are divided here into three:
1. Mealy machine
In the theory of computation, which underpins the concept of a computer, a Mealy machine is a finite state automaton (or finite state transducer) whose output depends on both the current state and the current input. Accordingly, the state diagram of a Mealy machine labels each transition with an input signal and an output signal. This distinguishes it from a Moore machine, which produces an output only per state.
The name comes from G. H. Mealy, a state-machine pioneer who wrote "A Method for Synthesizing Sequential Circuits" in 1955.
2. Moore machine
In the theory of computation, a Moore machine is a finite state automaton whose output is determined solely by the current state (and not by the input). The state diagram of a Moore machine attaches an output signal to each state. This differs from a Mealy machine, which has an output for each transition.
The name comes from Edward F. Moore, a computer scientist and state-machine pioneer who wrote "Gedanken-experiments on Sequential Machines".
3. Petri Net
A Petri net is a model for representing discrete distributed systems. As a model, a Petri net is a directed bipartite graph consisting of places, transitions, and the arrows connecting them. To represent the state of the system, tokens are placed on particular places. When a transition fires, tokens move along the arrows.
Petri nets were first proposed by Carl Adam Petri in 1962.

MODELS OF COMPUTATION

There are three basic models of computation: functional, logic, and imperative. In addition to a set of values and the operations on them, each computational model has a set of operations used to describe computation.

a. Functional model: consists of a set of values, functions, and the operations of function application and function composition. Functions may take other functions as arguments and return functions as results (higher-order functions). A program is a collection of function definitions, and a computation is the application of functions.

b. Logic model: consists of a set of values, relation definitions, and logical inference. A program is a set of relation definitions, and a computation is a proof (a sequence of inferences).

c. Imperative model: consists of a set of values that includes a state, and an assignment operation for modifying the state. The state is a set of name-value pairs for constants and variables. A program is a sequence of assignments, and a computation is a sequence of states.
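The contrast between the three models can be made concrete in one small program. Python is used for all three sketches purely for convenience; each model has its own dedicated languages (e.g. Haskell for functional, Prolog for logic, C for imperative):

```python
# The same computation (summing squares) in the functional and imperative
# models, plus a toy relational query to suggest the logic model.
from functools import reduce

# Functional model: values plus function application/composition, no mutation.
sum_squares_fn = lambda xs: reduce(lambda acc, x: acc + x * x, xs, 0)

# Imperative model: a state (the variable `total`) modified by assignments.
def sum_squares_imp(xs):
    total = 0
    for x in xs:
        total += x * x        # each assignment produces the next state
    return total

# Logic model: facts (a relation) plus inference over them.
parent = {("tom", "bob"), ("bob", "ann")}
grandparent = {(a, c) for (a, b1) in parent for (b2, c) in parent if b1 == b2}

print(sum_squares_fn([1, 2, 3]), sum_squares_imp([1, 2, 3]), grandparent)
```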

ref: http://imranzulmi.blogspot.com/2011/04/model-komputasi.html

Sunday, 25 March 2012

Strengths and Weaknesses of the Types of Modern Computing

Going by its short history, modern computing has a founding figure who laid down its first foundations: John von Neumann, a scientist born in Hungary on 28 December 1903. Von Neumann graduated in 1926 holding two degrees, an undergraduate degree in chemical engineering from ETH and a doctorate in mathematics from the University of Budapest. His genius made him one of the great scientists of the 20th century, with mastery of mathematics, quantum theory, game theory, nuclear physics, and computer science, and that knowledge gave him a major role in the atomic bomb project at Los Alamos during World War II. His interest in hydrodynamics, and the difficulty of solving the nonlinear partial differential equations it involves, eventually drew von Neumann into computing. As a consultant on the development of ENIAC, he designed the computer architecture concept still in use today: the von Neumann architecture is a stored-program computer (program and data are held in memory) with a central control unit, I/O, and memory.
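The stored-program idea at the heart of the von Neumann architecture can be sketched in a few lines: program and data share one memory, and a central control loop fetches and executes instructions one by one. The 4-instruction machine below is invented for illustration:

```python
# A minimal sketch of a stored-program (von Neumann) machine: instructions
# and data live in the same memory, and the control unit runs a
# fetch-decode-execute loop. The instruction set here is made up.

memory = [
    ("LOAD", 6),     # 0: acc <- memory[6]
    ("ADD", 7),      # 1: acc <- acc + memory[7]
    ("STORE", 7),    # 2: memory[7] <- acc
    ("HALT", None),  # 3: stop
    0, 0,            # 4-5: unused
    40, 2,           # 6-7: data lives alongside the program
]

acc, pc = 0, 0                  # accumulator and program counter
while True:
    op, addr = memory[pc]       # fetch
    pc += 1
    if op == "LOAD":            # decode and execute
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(acc, memory[7])  # 42 42
```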

The types of modern computing are as follows:

1. Mobile computing

Mobile computing has several definitions; one of them is that it is the advance in computer technology that allows communication over a network without cables, on devices that are easy to carry from place to place. It is related to, but distinct from, wireless computing.
Examples of mobile computing devices include GPS units; smartphones are a typical class of mobile computing device.

Strengths

A wide range of applications
Free movement between locations
Free movement between networks
Weaknesses

Limited bandwidth
Internet access on these devices is slow compared with wired access; GPRS, EDGE, 3G, and high-speed wireless LAN technologies are not overly expensive but still offer limited bandwidth.
Power consumption depends heavily on battery life.
Transmission interference: distance from the transmitter and the weather strongly affect data transmission in mobile computing.
Potential for accidents


2. Grid computing

Grid computing uses geographically separated computers, distributed and connected by a network, to solve large-scale computational problems.
A common checklist for recognising a grid computing system: computing resources are coordinated without central control; the system uses open standards and protocols; and the system aims to deliver a quality of service beyond that of its individual components.

Strengths:

Multiplied resources: a pool of CPUs and storage becomes available when otherwise idle.
Faster and bigger: simulations and problem-solving can run faster and cover a wider domain.
Software and applications: a pool of standard applications and libraries, access to different models and tools, and better research methodology.
Data: better access to global data sources and research results.
The size and complexity of problems push people in several organisations to collaborate and share computing resources, data, and instruments, giving rise to a new organisational form: the virtual organisation.
Weaknesses

Obstacles to applying the technology in Indonesia include:

Overly bureaucratic institutional management makes institutions reluctant to open up their facilities for shared use, even though doing so would bring greater benefit to the wider community.
There are still few people with the skills to manage grid computing.
IT technicians and non-technical users alike lack sufficient knowledge of the benefits of grid computing.


3. Cloud computing

Cloud computing is a style of computing in which dynamically scalable and often virtualised resources are provided as a service over the internet. The term describes a new model for supplying, consuming, and delivering IT services via the internet, typically involving dynamically scalable and frequently virtualised resources.

Strengths of Cloud Computing

It saves the up-front investment in buying resources.
It saves time, so a company can focus directly on profit and grow quickly.
It makes operations and management easier, because personal or company systems connected in one cloud can be monitored and managed easily.
It makes collaboration more trusted and leaner.
It cuts operating costs when reliability must be raised for business-critical information systems.

Weaknesses of Cloud Computing

Computers become slow or unusable altogether when the internet connection fails or is overloaded. In addition, a company renting cloud computing services has no direct access to the underlying resources, so everything depends on the condition of the vendor. If the vendor's servers fail or its backup service is poor, the company can suffer major losses.


Each of these types of modern computing (mobile, grid, and cloud) shares similarities and differences with the others:

Similarities:

All three are methods of performing computation to solve a problem and find its solution.
All three require a modern data-processing device, such as a PC, laptop, or mobile phone, to run.

Differences:

Mobile computing uses portable devices that work like mobile phones, while grid and cloud computing use ordinary computers.
Mobile computing power is more expensive than grid or cloud computing power.
Mobile computing needs no fixed location and is easy to carry anywhere, while grid and cloud computing require dedicated facilities.
In mobile computing, processing depends on the user; in grid computing it depends on whether the user can reach a server; and in cloud computing it requires an internet connection as the link.


source:
http://yudhi-pratama.it-kosongsatu.com/?p=13

High-Speed Memory (The Modern History of Computing)

High-Speed Memory

The EDVAC and ACE proposals both advocated the use of mercury-filled tubes, called ‘delay lines’, for high-speed internal memory. This form of memory is known as acoustic memory. Delay lines had initially been developed for echo cancellation in radar; the idea of using them as memory devices originated with Eckert at the Moore School. Here is Turing's description:

It is proposed to build "delay line" units consisting of mercury … tubes about 5′ long and 1″ in diameter in contact with a quartz crystal at each end. The velocity of sound in … mercury … is such that the delay will be 1.024 ms. The information to be stored may be considered to be a sequence of 1024 ‘digits’ (0 or 1) … These digits will be represented by a corresponding sequence of pulses. The digit 0 … will be represented by the absence of a pulse at the appropriate time, the digit 1 … by its presence. This series of pulses is impressed on the end of the line by one piezo-crystal, it is transmitted down the line in the form of supersonic waves, and is reconverted into a varying voltage by the crystal at the far end. This voltage is amplified sufficiently to give an output of the order of 10 volts peak to peak and is used to gate a standard pulse generated by the clock. This pulse may be again fed into the line by means of the transmitting crystal, or we may feed in some altogether different signal. We also have the possibility of leading the gated pulse to some other part of the calculator, if we have need of that information at the time. Making use of the information does not of course preclude keeping it also. (Turing [1945], p. 375)
Mercury delay line memory was used in EDSAC, BINAC, SEAC, Pilot Model ACE, EDVAC, DEUCE, and full-scale ACE (1958). The chief advantage of the delay line as a memory medium was, as Turing put it, that delay lines were "already a going concern" (Turing [1947], p. 380). The fundamental disadvantages of the delay line were that random access is impossible and, moreover, the time taken for an instruction, or number, to emerge from a delay line depends on where in the line it happens to be.
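A delay line behaves like a circular buffer: one bit emerges per clock tick at the far end and must be re-fed at the near end to be retained, which is exactly why random access is impossible. A toy model of the recirculation:

```python
# Toy model of a mercury delay line as a circular buffer: 1024 bits
# circulate, one bit emerges per clock tick, and whatever emerges must be
# re-fed (regenerated) at the other end or it is lost. Reading position k
# can therefore take up to 1023 ticks of waiting, the non-random access
# discussed above. Parameters follow Turing's description; the class is ours.

class DelayLine:
    def __init__(self, bits):
        self.line = list(bits)        # pulses travelling down the mercury

    def tick(self):
        """One clock period: the front bit emerges and is recirculated."""
        bit = self.line.pop(0)
        self.line.append(bit)         # re-fed via the transmitting crystal
        return bit

line = DelayLine([1, 0, 1, 1] + [0] * 1020)   # 1024 'digits', as in the ACE
first_word = [line.tick() for _ in range(4)]
print(first_word)  # [1, 0, 1, 1] -- and the bits remain stored in the line
```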

In order to minimize waiting-time, Turing arranged for instructions to be stored not in consecutive positions in the delay line, but in relative positions selected by the programmer in such a way that each instruction would emerge at exactly the time it was required, in so far as this was possible. Each instruction contained a specification of the location of the next. This system subsequently became known as ‘optimum coding’. It was an integral feature of every version of the ACE design. Optimum coding made for difficult and untidy programming, but the advantage in terms of speed was considerable. Thanks to optimum coding, the Pilot Model ACE was able to do a floating point multiplication in 3 milliseconds (Wilkes's EDSAC required 4.5 milliseconds to perform a single fixed point multiplication).
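A back-of-envelope model shows why optimum coding paid off. With invented numbers (a 32-word line, 5 ticks per instruction), consecutive placement wastes most of a revolution waiting, while placing the next instruction where the line will be when execution finishes wastes nothing:

```python
# Illustrative model of 'optimum coding' on a delay line: only one word
# emerges per tick, so the wait for the next instruction depends on where
# the programmer placed it. The line size and execution time are invented.

LINE = 32          # words circulating in one delay line
EXEC = 5           # ticks needed to execute one instruction

def wait(pos, next_pos):
    """Ticks spent waiting for next_pos to emerge after finishing at pos."""
    return (next_pos - (pos + EXEC)) % LINE

# Consecutive placement: instruction i+1 sits right behind instruction i,
# so by the time execution finishes it has already gone past.
consecutive = wait(0, 1)                  # most of a revolution wasted
# Optimum placement: the next instruction is stored EXEC words ahead,
# so it emerges exactly when it is needed.
optimum = wait(0, (0 + EXEC) % LINE)      # no ticks wasted
print(consecutive, optimum)  # 28 0
```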

In the Williams tube or electrostatic memory, previously mentioned, a two-dimensional rectangular array of binary digits was stored on the face of a commercially-available cathode ray tube. Access to data was immediate. Williams tube memories were employed in the Manchester series of machines, SWAC, the IAS computer, and the IBM 701, and a modified form of Williams tube in Whirlwind I (until replacement by magnetic core in 1953).

Drum memories, in which data was stored magnetically on the surface of a metal cylinder, were developed on both sides of the Atlantic. The initial idea appears to have been Eckert's. The drum provided reasonably large quantities of medium-speed memory and was used to supplement a high-speed acoustic or electrostatic memory. In 1949, the Manchester computer was successfully equipped with a drum memory; this was constructed by the Manchester engineers on the model of a drum developed by Andrew Booth at Birkbeck College, London.

The final major event in the early history of electronic computation was the development of magnetic core memory. Jay Forrester realised that the hysteresis properties of magnetic core (normally used in transformers) lent themselves to the implementation of a three-dimensional solid array of randomly accessible storage points. In 1949, at Massachusetts Institute of Technology, he began to investigate this idea empirically. Forrester's early experiments with metallic core soon led him to develop the superior ferrite core memory. MIT's Digital Computer Laboratory undertook to build a computer similar to the Whirlwind I as a test vehicle for a ferrite core memory. The Memory Test Computer was completed in 1953. (This computer was used in 1954 for the first simulations of neural networks, by Belmont Farley and Wesley Clark of MIT's Lincoln Laboratory (see Copeland and Proudfoot [1996]).)

source:
http://plato.stanford.edu/entries/computing-history/

Turing's Automatic Computing Engine (The Modern History of Computing)

Turing's Automatic Computing Engine

Turing and Newman were thinking along similar lines. In 1945 Turing joined the National Physical Laboratory (NPL) in London, his brief to design and develop an electronic stored-program digital computer for scientific work. (Artificial Intelligence was not far from Turing's thoughts: he described himself as ‘building a brain’ and remarked in a letter that he was ‘more interested in the possibility of producing models of the action of the brain than in the practical applications to computing’.) John Womersley, Turing's immediate superior at NPL, christened Turing's proposed machine the Automatic Computing Engine, or ACE, in homage to Babbage's Difference Engine and Analytical Engine.

Turing's 1945 report ‘Proposed Electronic Calculator’ gave the first relatively complete specification of an electronic stored-program general-purpose digital computer. The report is reprinted in full in Copeland 2005.

The first electronic stored-program digital computer to be proposed in the U.S. was the EDVAC (see below). The ‘First Draft of a Report on the EDVAC’ (May 1945), composed by von Neumann, contained little engineering detail, in particular concerning electronic hardware (owing to restrictions in the U.S.). Turing's ‘Proposed Electronic Calculator’, on the other hand, supplied detailed circuit designs and specifications of hardware units, specimen programs in machine code, and even an estimate of the cost of building the machine (£11,200). ACE and EDVAC differed fundamentally from one another; for example, ACE employed distributed processing, while EDVAC had a centralised structure.

Turing saw that speed and memory were the keys to computing. Turing's colleague at NPL, Jim Wilkinson, observed that Turing ‘was obsessed with the idea of speed on the machine’ [Copeland 2005, p. 2]. Turing's design had much in common with today's RISC architectures and it called for a high-speed memory of roughly the same capacity as an early Macintosh computer (enormous by the standards of his day). Had Turing's ACE been built as planned it would have been in a different league from the other early computers. However, progress on Turing's Automatic Computing Engine ran slowly, due to organisational difficulties at NPL, and in 1948 a ‘very fed up’ Turing (Robin Gandy's description, in interview with Copeland, 1995) left NPL for Newman's Computing Machine Laboratory at Manchester University. It was not until May 1950 that a small pilot model of the Automatic Computing Engine, built by Wilkinson, Edward Newman, Mike Woodger, and others, first executed a program. With an operating speed of 1 MHz, the Pilot Model ACE was for some time the fastest computer in the world.

Sales of DEUCE, the production version of the Pilot Model ACE, were buoyant — confounding the suggestion, made in 1946 by the Director of the NPL, Sir Charles Darwin, that ‘it is very possible that … one machine would suffice to solve all the problems that are demanded of it from the whole country’ [Copeland 2005, p. 4]. The fundamentals of Turing's ACE design were employed by Harry Huskey (at Wayne State University, Detroit) in the Bendix G15 computer (Huskey in interview with Copeland, 1998). The G15 was arguably the first personal computer; over 400 were sold worldwide. DEUCE and the G15 remained in use until about 1970. Another computer deriving from Turing's ACE design, the MOSAIC, played a role in Britain's air defences during the Cold War period; other derivatives include the Packard-Bell PB250 (1961). (More information about these early computers is given in [Copeland 2005].)

The Manchester Machine

The earliest general-purpose stored-program electronic digital computer to work was built in Newman's Computing Machine Laboratory at Manchester University. The Manchester ‘Baby’, as it became known, was constructed by the engineers F.C. Williams and Tom Kilburn, and performed its first calculation on 21 June 1948. The tiny program, stored on the face of a cathode ray tube, was just seventeen instructions long. A much enlarged version of the machine, with a programming system designed by Turing, became the world's first commercially available computer, the Ferranti Mark I. The first to be completed was installed at Manchester University in February 1951; in all about ten were sold, in Britain, Canada, Holland and Italy.

The fundamental logico-mathematical contributions by Turing and Newman to the triumph at Manchester have been neglected, and the Manchester machine is nowadays remembered as the work of Williams and Kilburn. Indeed, Newman's role in the development of computers has never been sufficiently emphasised (due perhaps to his thoroughly self-effacing way of relating the relevant events).

It was Newman who, in a lecture in Cambridge in 1935, introduced Turing to the concept that led directly to the Turing machine: Newman defined a constructive process as one that a machine can carry out (Newman in interview with Evans, op. cit.). As a result of his knowledge of Turing's work, Newman became interested in the possibilities of computing machinery in, as he put it, ‘a rather theoretical way’. It was not until Newman joined GC&CS in 1942 that his interest in computing machinery suddenly became practical, with his realisation that the attack on Tunny could be mechanised. During the building of Colossus, Newman tried to interest Flowers in Turing's 1936 paper — birthplace of the stored-program concept - but Flowers did not make much of Turing's arcane notation. There is no doubt that by 1943, Newman had firmly in mind the idea of using electronic technology in order to construct a stored-program general-purpose digital computing machine.

In July of 1946 (the month in which the Royal Society approved Newman's application for funds to found the Computing Machine Laboratory), Freddie Williams, working at the Telecommunications Research Establishment, Malvern, began the series of experiments on cathode ray tube storage that was to lead to the Williams tube memory. Williams, until then a radar engineer, explains how it was that he came to be working on the problem of computer memory:

[O]nce [the German Armies] collapsed … nobody was going to care a toss about radar, and people like me … were going to be in the soup unless we found something else to do. And computers were in the air. Knowing absolutely nothing about them I latched onto the problem of storage and tackled that. (Quoted in Bennett 1976.)

source:
http://plato.stanford.edu/entries/computing-history/

Electromechanical versus Electronic Computation (The Modern History of Computing)

With some exceptions — including Babbage's purely mechanical engines, and the finger-powered National Accounting Machine — early digital computing machines were electromechanical. That is to say, their basic components were small, electrically-driven, mechanical switches called ‘relays’. These operate relatively slowly, whereas the basic components of an electronic computer — originally vacuum tubes (valves) — have no moving parts save electrons and so operate extremely fast. Electromechanical digital computing machines were built before and during the second world war by (among others) Howard Aiken at Harvard University, George Stibitz at Bell Telephone Laboratories, Turing at Princeton University and Bletchley Park, and Konrad Zuse in Berlin. To Zuse belongs the honour of having built the first working general-purpose program-controlled digital computer. This machine, later called the Z3, was functioning in 1941. (A program-controlled computer, as opposed to a stored-program computer, is set up for a new task by re-routing wires, by means of plugs etc.)

Relays were too slow and unreliable a medium for large-scale general-purpose digital computation (although Aiken made a valiant effort). It was the development of high-speed digital techniques using vacuum tubes that made the modern computer possible.

The earliest extensive use of vacuum tubes for digital data-processing appears to have been by the engineer Thomas Flowers, working in London at the British Post Office Research Station at Dollis Hill. Electronic equipment designed by Flowers in 1934, for controlling the connections between telephone exchanges, went into operation in 1939, and involved between three and four thousand vacuum tubes running continuously. In 1938–1939 Flowers worked on an experimental electronic digital data-processing system, involving a high-speed data store. Flowers' aim, achieved after the war, was that electronic equipment should replace existing, less reliable, systems built from relays and used in telephone exchanges. Flowers did not investigate the idea of using electronic equipment for numerical calculation, but has remarked that at the outbreak of war with Germany in 1939 he was possibly the only person in Britain who realized that vacuum tubes could be used on a large scale for high-speed digital computation. (See Copeland 2006 for more information on Flowers' work.)

Atanasoff

The earliest comparable use of vacuum tubes in the U.S. seems to have been by John Atanasoff at what was then Iowa State College (now University). During the period 1937–1942 Atanasoff developed techniques for using vacuum tubes to perform numerical calculations digitally. In 1939, with the assistance of his student Clifford Berry, Atanasoff began building what is sometimes called the Atanasoff-Berry Computer, or ABC, a small-scale special-purpose electronic digital machine for the solution of systems of linear algebraic equations. The machine contained approximately 300 vacuum tubes. Although the electronic part of the machine functioned successfully, the computer as a whole never worked reliably, errors being introduced by the unsatisfactory binary card-reader. Work was discontinued in 1942 when Atanasoff left Iowa State.

Colossus

The first fully functioning electronic digital computer was Colossus, used by the Bletchley Park cryptanalysts from February 1944.

From very early in the war the Government Code and Cypher School (GC&CS) was successfully deciphering German radio communications encoded by means of the Enigma system, and by early 1942 about 39,000 intercepted messages were being decoded each month, thanks to electromechanical machines known as ‘bombes’. These were designed by Turing and Gordon Welchman (building on earlier work by Polish cryptanalysts).

During the second half of 1941, messages encoded by means of a totally different method began to be intercepted. This new cipher machine, code-named ‘Tunny’ by Bletchley Park, was broken in April 1942 and current traffic was read for the first time in July of that year. Based on binary teleprinter code, Tunny was used in preference to Morse-based Enigma for the encryption of high-level signals, for example messages from Hitler and members of the German High Command.

The need to decipher this vital intelligence as rapidly as possible led Max Newman to propose in November 1942 (shortly after his recruitment to GC&CS from Cambridge University) that key parts of the decryption process be automated, by means of high-speed electronic counting devices. The first machine designed and built to Newman's specification, known as the Heath Robinson, was relay-based with electronic circuits for counting. (The electronic counters were designed by C.E. Wynn-Williams, who had been using thyratron tubes in counting circuits at the Cavendish Laboratory, Cambridge, since 1932 [Wynn-Williams 1932].) Installed in June 1943, Heath Robinson was unreliable and slow, and its high-speed paper tapes were continually breaking, but it proved the worth of Newman's idea. Flowers recommended that an all-electronic machine be built instead, but he received no official encouragement from GC&CS. Working independently at the Post Office Research Station at Dollis Hill, Flowers quietly got on with constructing the world's first large-scale programmable electronic digital computer. Colossus I was delivered to Bletchley Park in January 1944.

By the end of the war there were ten Colossi working round the clock at Bletchley Park. From a cryptanalytic viewpoint, a major difference between the prototype Colossus I and the later machines was the addition of the so-called Special Attachment, following a key discovery by cryptanalysts Donald Michie and Jack Good. This broadened the function of Colossus from ‘wheel setting’ — i.e., determining the settings of the encoding wheels of the Tunny machine for a particular message, given the ‘patterns’ of the wheels — to ‘wheel breaking’, i.e., determining the wheel patterns themselves. The wheel patterns were eventually changed daily by the Germans on each of the numerous links between the German Army High Command and Army Group commanders in the field. By 1945 there were as many as 30 links in total. About ten of these were broken and read regularly.

source:
http://plato.stanford.edu/entries/computing-history/

Analog computers (The Modern History of Computing)

The earliest computing machines in wide use were not digital but analog. In analog representation, properties of the representational medium ape (or reflect or model) properties of the represented state-of-affairs. (In obvious contrast, the strings of binary digits employed in digital representation do not represent by means of possessing some physical property — such as length — whose magnitude varies in proportion to the magnitude of the property that is being represented.) Analog representations form a diverse class. Some examples: the longer a line on a road map, the longer the road that the line represents; the greater the number of clear plastic squares in an architect's model, the greater the number of windows in the building represented; the higher the pitch of an acoustic depth meter, the shallower the water. In analog computers, numerical quantities are represented by, for example, the angle of rotation of a shaft or a difference in electrical potential. Thus the output voltage of the machine at a time might represent the momentary speed of the object being modelled.

As the case of the architect's model makes plain, analog representation may be discrete in nature (there is no such thing as a fractional number of windows). Among computer scientists, the term ‘analog’ is sometimes used narrowly, to indicate representation of one continuously-valued quantity by another (e.g., speed by voltage). As Brian Cantwell Smith has remarked:

‘Analog’ should … be a predicate on a representation whose structure corresponds to that of which it represents … That continuous representations should historically have come to be called analog presumably betrays the recognition that, at the levels at which it matters to us, the world is more foundationally continuous than it is discrete. (Smith [1991], p. 271)

James Thomson, brother of Lord Kelvin, invented the mechanical wheel-and-disc integrator that became the foundation of analog computation (Thomson [1876]). The two brothers constructed a device for computing the integral of the product of two given functions, and Kelvin described (although did not construct) general-purpose analog machines for integrating linear differential equations of any order and for solving simultaneous linear equations. Kelvin's most successful analog computer was his tide predicting machine, which remained in use at the port of Liverpool until the 1960s. Mechanical analog devices based on the wheel-and-disc integrator were in use during World War I for gunnery calculations. Following the war, the design of the integrator was considerably improved by Hannibal Ford (Ford [1919]).

Stanley Fifer reports that the first semi-automatic mechanical analog computer was built in England by the Manchester firm of Metropolitan Vickers prior to 1930 (Fifer [1961], p. 29); however, I have so far been unable to verify this claim. In 1931, Vannevar Bush, working at MIT, built the differential analyser, the first large-scale automatic general-purpose mechanical analog computer. Bush's design was based on the wheel and disc integrator. Soon copies of his machine were in use around the world (including, at Cambridge and Manchester Universities in England, differential analysers built out of kit-set Meccano, the once popular engineering toy).

It required a skilled mechanic equipped with a lead hammer to set up Bush's mechanical differential analyser for each new job. Subsequently, Bush and his colleagues replaced the wheel-and-disc integrators and other mechanical components by electromechanical, and finally by electronic, devices.

A differential analyser may be conceptualised as a collection of ‘black boxes’ connected together in such a way as to allow considerable feedback. Each box performs a fundamental process, for example addition, multiplication of a variable by a constant, and integration. In setting up the machine for a given task, boxes are connected together so that the desired set of fundamental processes is executed. In the case of electrical machines, this was done typically by plugging wires into sockets on a patch panel (computing machines whose function is determined in this way are referred to as ‘program-controlled’).

Since all the boxes work in parallel, an electronic differential analyser solves sets of equations very quickly. Against this has to be set the cost of massaging the problem to be solved into the form demanded by the analog machine, and of setting up the hardware to perform the desired computation. A major drawback of analog computation is the higher cost, relative to digital machines, of an increase in precision. During the 1960s and 1970s, there was considerable interest in ‘hybrid’ machines, where an analog section is controlled by and programmed via a digital section. However, such machines are now a rarity.
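The ‘black box’ picture of a differential analyser is easy to mimic in software. The following Python sketch is a toy model, not period hardware: two integrator boxes and one constant-multiplier box are wired in a feedback loop to solve y'' = -y, the equation of simple harmonic motion, by stepping time forward in small increments.

```python
import math

# A toy 'differential analyser': each variable below is the output of
# one 'box'. The multiplier box feeds y back as y'' = -y; the two
# integrator boxes accumulate y'' into y' and y' into y.

def solve_harmonic(dt=1e-4, t_end=math.pi):
    y, dy = 1.0, 0.0      # initial conditions: y(0) = 1, y'(0) = 0
    t = 0.0
    while t < t_end:
        ddy = -y          # multiplier box: y'' = -y (the feedback)
        dy += ddy * dt    # first integrator: y'' accumulates into y'
        y += dy * dt      # second integrator: y' accumulates into y
        t += dt
    return y

# The exact solution is y(t) = cos(t), so y(pi) should be close to -1.
print(round(solve_harmonic(), 3))
```

Setting up a real analyser meant physically connecting such boxes; here the ‘wiring’ is just the order in which each box's output feeds the next.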

source:
http://plato.stanford.edu/entries/computing-history/

The Modern History of Computing

Historically, computers were human clerks who calculated in accordance with effective methods. These human computers did the sorts of calculation nowadays carried out by electronic computers, and many thousands of them were employed in commerce, government, and research establishments. The term computing machine, used increasingly from the 1920s, refers to any machine that does the work of a human computer, i.e., any machine that calculates in accordance with effective methods. During the late 1940s and early 1950s, with the advent of electronic computing machines, the phrase ‘computing machine’ gradually gave way simply to ‘computer’, initially usually with the prefix ‘electronic’ or ‘digital’. This entry surveys the history of these machines.

Babbage

Charles Babbage was Lucasian Professor of Mathematics at Cambridge University from 1828 to 1839 (a post formerly held by Isaac Newton). Babbage's proposed Difference Engine was a special-purpose digital computing machine for the automatic production of mathematical tables (such as logarithm tables, tide tables, and astronomical tables). The Difference Engine consisted entirely of mechanical components — brass gear wheels, rods, ratchets, pinions, etc. Numbers were represented in the decimal system by the positions of 10-toothed metal wheels mounted in columns. Babbage exhibited a small working model in 1822. He never completed the full-scale machine that he had designed but did complete several fragments. The largest — one ninth of the complete calculator — is on display in the London Science Museum. Babbage used it to perform serious computational work, calculating various mathematical tables. In 1991, Babbage's Difference Engine No. 2 was finally built from Babbage's designs and is also on display at the London Science Museum.

The Swedes Georg and Edvard Scheutz (father and son) constructed a modified version of Babbage's Difference Engine. Three were made, a prototype and two commercial models, one of these being sold to an observatory in Albany, New York, and the other to the Registrar-General's office in London, where it calculated and printed actuarial tables.

Babbage's proposed Analytical Engine, considerably more ambitious than the Difference Engine, was to have been a general-purpose mechanical digital computer. The Analytical Engine was to have had a memory store and a central processing unit (or ‘mill’) and would have been able to select from among alternative actions consequent upon the outcome of its previous actions (a facility nowadays known as conditional branching). The behaviour of the Analytical Engine would have been controlled by a program of instructions contained on punched cards connected together with ribbons (an idea that Babbage had adopted from the Jacquard weaving loom). Babbage emphasised the generality of the Analytical Engine, saying ‘the conditions which enable a finite machine to make calculations of unlimited extent are fulfilled in the Analytical Engine’ (Babbage [1994], p. 97).

Babbage worked closely with Ada Lovelace, daughter of the poet Byron, after whom the modern programming language Ada is named. Lovelace foresaw the possibility of using the Analytical Engine for non-numeric computation, suggesting that the Engine might even be capable of composing elaborate pieces of music.

A large model of the Analytical Engine was under construction at the time of Babbage's death in 1871 but a full-scale version was never built. Babbage's idea of a general-purpose calculating engine was never forgotten, especially at Cambridge, and was on occasion a lively topic of mealtime discussion at the war-time headquarters of the Government Code and Cypher School, Bletchley Park, Buckinghamshire, birthplace of the electronic digital computer.

source:
http://plato.stanford.edu/entries/computing-history/

Why modern computing will kill traditional storage

"This company is dead. ... You know why? Fiber optics. New technologies. Obsolescence. We're dead alright. We're just not broke. And you know the surest way to go broke? Keep getting an increasing share of a shrinking market. Down the tubes. Slow but sure. You know, at one time there must've been dozens of companies making buggy whips. And I'll bet the last company around was the one that made the best goddamn buggy whip you ever saw. Now how would you have liked to have been a stockholder in that company?"

-Danny DeVito (as Lawrence Garfield) in Other People's Money (1991)

At the end of the day, IT can be boiled down to three things: computing, storage and networking. Major revolutions in IT (e.g. the move to client-server, the ongoing move to cloud) require major changes in all three areas.

It should be no surprise that computing has changed dramatically over the past 10 years. Just as no one would invest in a buggy-whip company, you would be hard-pressed to find anyone interested in investing in a company that manufactured big iron computing systems (mainframes, supercomputers, and the like). This is obviously not because the need for reliable, powerful, centralized computing power has decreased. Rather, people have realized that computing became more robust, more reliable, more manageable and more economical as the following transformations occurred:

- proprietary software systems were replaced by open technologies like Linux and the rest of the LAMP stack;

- dedicated, single-use systems were replaced by virtualized architectures, which let multiple apps run on the same computer or let a single application run on multiple computers; and,

- large, monolithic, scale-up architectures were replaced by scale-out architectures, which let you build power by combining large numbers of redundant, small elements.


In other words, now computing is done by treating it as a scale-out, virtualized, commoditized and centrally-managed pool. An organization can own its own pool (a private cloud) or rent space in a pool someone else owns (a public cloud). In either case, the pool approach works and people are diving in.




Of course, if computing is going this way, storage and networking need to go this way as well. From an architectural standpoint, storage needs to support the new computing paradigm. It doesn't do you much good to move your applications around dynamically to take advantage of any spare CPU cycle if the application data is still locked inside an expensive, inflexible box. It's not surprising that many are now citing storage as the Achilles heel of true data center virtualization. The situation gets even worse when one considers the challenges of deploying hybrid clouds, when the ease of moving virtual machines between data centers runs smack against the challenges of moving terabytes and petabytes of application data economically and efficiently between disparate data centers over (relatively) low bandwidth connections.




Storage needs to do much more than just support the new computing paradigm. Inevitably, storage must begin to LOOK much more like computing: scale-out, open source, commoditized, virtualized and present in the cloud.

However, the cloud movement will demand much more fundamental changes. Not only must storage be delivered in increments (i.e. scale-out), it also must be delivered in a way that untethers the fundamental storage functions from any particular hardware or from a particular vendor. You can't define what storage hardware will be available in the cloud. Instead, storage must be treated as a software problem -- with a software solution.

source:
http://blogs.computerworld.com/18372/why_modern_computing_will_kill_traditional_storage

Modern Computing: “Ubiquitous Computing”

“Ubicomp” is short for “ubiquitous computing”: an approach that aims to provide the user's physical environment with a range of computers that work as effectively as possible while remaining as invisible as possible.

To elaborate briefly on the statement above: ubiquitous computing is not limited to a PC, notebook, or PDA. It extends to all kinds of devices that behave so naturally that the people using them (ubicomp devices) do not feel that they are using or accessing a computer at all.

The following are examples of what ubiquitous computing looks like in practice:

1. Consider an engineer at a technology company. He drives to work along a modern toll road with no attendant at the toll gate. His car is fitted with a smart badge containing a microchip that automatically transmits the car's identity to a series of sensors as it passes the toll gate, as shown in Figure 1. The toll is debited directly from his account every week, based on data that is updated each time his car passes the gate and stored on the toll operator's computer.

2. As his car approaches the office, a sensor on the front gate recognises the vehicle, thanks to another transmitter fitted to the car, and opens the gate automatically.

3. The engineer's employee badge carries a transmitter that automatically activates a series of sensors as he enters the building. The door to his workroom opens by itself, the air conditioning is set to a temperature he finds comfortable, and the coffee machine starts preparing his drink.

4. The engineer's desk is covered with a soft pad that serves several functions. When he places his mobile phone on the pad, the phone's battery is charged automatically. The day's schedule, already stored on the phone, is transferred automatically to his computer, with the pad acting as the input device. If, for example, he has scheduled a meeting with his staff that day, the computer automatically notifies every participant that the meeting is about to begin.

The scenario above requires no revolutionary invention: no intricate artificial-intelligence algorithms, and no science-fiction gadgets beyond the reach of present reality. The charger pad for mobile phones shown in Figure 2, for instance, is already a commercially produced device; give it a feature for transferring data from the phone to a computer and it becomes a complete example of a ubicomp device. With today's micro and nano technology, a single small, flat employee card holding a few microchips can act as both a transmitter and a data-storage medium. The responses of devices such as the pad, the air conditioner, the automatic door, and so on can be governed by a series of simple IF-THEN rules. Communication between devices, or from transmitter to sensor, needs nothing more than ordinary wireless technology that is already in everyday use.
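The simple IF-THEN control just described can be sketched in a few lines of Python. The sensor names, badge IDs, and actions below are hypothetical, invented purely to illustrate the pattern: a sensor reports a badge event, and every rule whose IF-part matches fires its action.

```python
# A rule table for the hypothetical devices in the story above.
rules = [
    # (sensor, badge id)         -> action to trigger
    (("office_door", "ENG-042"), "open door"),
    (("office_door", "ENG-042"), "set air conditioning to 22 C"),
    (("office_door", "ENG-042"), "start coffee machine"),
    (("car_gate", "CAR-17"),     "open gate"),
]

def on_event(sensor, badge_id):
    """Return every action whose IF-part matches this sensor reading."""
    return [action for cond, action in rules if cond == (sensor, badge_id)]

print(on_event("office_door", "ENG-042"))
# ['open door', 'set air conditioning to 22 C', 'start coffee machine']
print(on_event("car_gate", "UNKNOWN"))  # unrecognised badge: no actions
# []
```

No learning and no inference is involved; the ‘intelligence’ of the environment is nothing more than table lookup triggered by cheap wireless sensors.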


Aspects that support the development of ubiquitous computing:


* Natural Interfaces: using natural human behaviour as the means of manipulating data, for example technologies such as voice recognition or pen computing.

* Context Aware Computing: treating a computational process as concerned not only with the single object that is its main focus but also with the context surrounding that object, in contrast to conventional computing.

* Micro-Nano Technology: technology that exploits microchips of extraordinarily small size, such as the T-Engine or Radio Frequency Identification (RFID), applied in everyday life in the form of smart cards or tags. For example, a commuter with a stored-value bus card simply passes the card over a sensor when boarding and alighting, and the balance is debited according to the distance travelled.
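The distance-based fare card in the last example reduces to a small amount of bookkeeping. Here is a minimal Python sketch; the stop positions and the per-kilometre fare are invented numbers, purely to illustrate the debit-by-distance idea.

```python
# Stop positions along the route (km from the start) and an assumed fare.
STOP_KM = {"A": 0.0, "B": 3.5, "C": 8.0}
FARE_PER_KM = 500

class FareCard:
    """A stored-value card tapped on a sensor when boarding and alighting."""

    def __init__(self, balance):
        self.balance = balance
        self.boarded_at = None

    def tap_in(self, stop):
        self.boarded_at = stop  # the boarding sensor records the stop

    def tap_out(self, stop):
        # Debit the balance in proportion to the distance travelled.
        distance = abs(STOP_KM[stop] - STOP_KM[self.boarded_at])
        self.balance -= int(distance * FARE_PER_KM)
        self.boarded_at = None
        return self.balance

card = FareCard(balance=20000)
card.tap_in("A")
print(card.tap_out("C"))  # 8.0 km * 500 = 4000 debited, leaving 16000
```

All the RFID tag has to carry is an identifier; the balance and the fare rules live on the operator's computer, exactly as in the toll-road example earlier.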

In conclusion:

Ubiquitous Computing, or Ubicomp, has inspired the development of computing that is “off the desktop”, in which human-computer interaction is natural and gradually leaves behind the keyboard/mouse/display paradigm of the PC generation. We know that when a person moves, speaks, or writes, other people receive this as the input of a form of communication. Ubicomp applies the same idea, using movement, speech, or writing as explicit or implicit input to a computer. One positive effect of ubicomp is that people who lack computer skills, as well as people with physical disabilities, can still use computers for all their needs.

source:
http://hellisfun.wordpress.com/2011/05/04/jurnal-komputasi-modern-ubiquitous-computing/