In futurism, computronium refers to a hypothetical material engineered to maximize its use as a computing substrate. While futurists usually use it to refer to hypothetical materials engineered on the molecular, atomic, or subatomic level by some advanced form of nanotechnology, the term can also be applied both to contemporary computing materials and to constructs of theoretical physics that are unlikely ever to be practical to build.
Many futurists speculate about futures where demand for computing power grows to the point where very large amounts of computronium are desired. Examples of applications include Jupiter Brains, planet-sized constructs made of computronium, and Matrioshka Brains, concentric Dyson spheres designed to extract all possible energy from the host star for use towards computation.
Conventional integrated circuits
Contemporary integrated circuits can be considered a form of computronium. The density and speed of integrated circuit computing elements have increased roughly exponentially for a period of several decades, following a trend described by Moore's Law. While it is generally accepted that this exponential improvement trend will end, it is unclear exactly how dense and fast integrated circuits will have become by the time this point is reached. Working devices have been demonstrated that were fabricated with a MOSFET channel length of 6.3 nanometres using conventional semiconductor materials, and devices have been built that used carbon nanotubes as MOSFET gates, giving a channel length of approximately 1 nanometre.
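The exponential trend described above is easy to make concrete. The sketch below extrapolates transistor density under an assumed fixed doubling period; both the starting density and the two-year doubling period are illustrative assumptions, not figures from this article.

```python
def density_after(years, initial_density, doubling_period=2.0):
    """Extrapolate device density assuming exponential (Moore's-law style)
    growth with the given doubling period in years. Illustrative only."""
    return initial_density * 2 ** (years / doubling_period)

# Hypothetical numbers: starting from 1e8 transistors/mm^2, a 2-year
# doubling period yields a 32x increase after a decade.
print(density_after(10, 1e8))  # 3.2e9
```

As the article notes, such extrapolations eventually fail; the model says nothing about where the physical cutoff lies.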
The ultimate density and computing power of integrated circuits are limited primarily by power dissipation concerns.
Molecular nanotechnology
Many futurists postulate some form of advanced nanotechnology able to mass-manufacture matter that is structured on a molecular or atomic level. This would allow the creation of new types of computing device, and hence computronium that is better optimized against the design constraints outlined above. In particular, computronium produced by molecular nanotechnology would be far lighter and more compact for a given computational or data storage capacity than computronium produced by bulk fabrication methods. Additionally, depending on implementation specifics, it is quite possible that molecular-scale devices would be able to perform computation more quickly than bulk-fabricated devices, though in practice this will likely run up against the same heat limits, with the final speed relationship being unclear.
Two types of molecular-scale computing substrate are potentially practical: those based on electrical signals, and those that use the position and motion of mechanical components to represent data and to perform computation. Electrical computation would be performed in a manner reasonably similar to the implementation of circuits today. Arguably it has already been demonstrated, in the form of field-effect transistors that use carbon nanotubes as gates. Carbon nanotubes also function acceptably as wires for conduction of signals in nanoscale circuits, providing a reasonable basis for the construction of molecular-level circuits. The drawback to using electrical signalling in molecular-scale circuits is the same as that for bulk-fabricated electronic circuits: heat dissipation. Thus, it is reasonable to conclude that computronium based on electrical devices fabricated on a molecular scale would have only a modest improvement in computing speed over the best devices that bulk-fabrication techniques have to offer. An improvement would still be present, if for no other reason than improved tailoring of surface to volume ratio and improved heat conduction in an appropriately constructed substrate.
Molecular-scale computronium based on mechanical devices could function by mechanisms similar to the rod logic proposed by Eric Drexler. While these too suffer from the problem of thermally induced noise, the mechanisms by which energy is dissipated are sufficiently different from the mechanisms for electrical devices that there is hope that mechanical devices may suffer somewhat less from the heating problems that afflict electrical devices.
DNA computing might be another candidate for molecular computation, or at the very least data storage.
Lastly, the option of optical computing mechanisms exists for molecular-scale computing devices. This is not listed among the potentially practical options for two reasons. Firstly, while the size of the active optical components may be very small, the size of the communications network must be comparable to the wavelength of the light used as a carrier. While waveguide techniques can ameliorate this constraint to some degree, they fall far short of producing the device densities possible using electrical or mechanical methods. Secondly, there is a very strong relation between the speed of computation, the wavelength of the light used as a carrier, and the amount of energy required to store a bit of information. This arises from the fact that, in order for the frequency of a light pulse to be well-defined, the duration of the pulse must be comparable to (or longer than) the period of oscillation of the photons acting as the carrier. For visible light, this is on the order of femtoseconds, while it is in principle possible for electronic devices to operate much more quickly. Speeding up this signal transmission time requires using a shorter wavelength, which involves more energy per photon. Even disregarding energy concerns, an upper limit to photon energy of at most a few eV is imposed by the requirement that the carrier photons not destroy molecular bonds in the substrate. In practice, an engineered optical computer would use photon energies in the 0.1 eV range or lower, for better ease of tailoring molecular energy levels to interact with the carrier photons. Finally, the energy required to encode one bit of information is approximately that needed to produce 2-4 photons of the carrier frequency.
This energy is relatively large compared to the amount needed to encode information using mechanical devices, though it is still comparable both to the amount needed to cause a state change in an electrical device, and the magnitude of thermal noise at room temperature. In summary, molecular-scale computronium based on optical principles would be quite mass-efficient, but quite bulky and slow compared to electronic or mechanical implementations.
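The wavelength, timescale, and energy trade-offs described above follow from two elementary relations: a photon's energy is E = hc/λ, and a pulse's duration must be at least roughly one oscillation period, λ/c. The sketch below evaluates both for visible (green) light and compares the photon energy against thermal noise kT at room temperature; the constants are standard physical values.

```python
# Standard physical constants (approximate CODATA values).
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt
k_B = 1.381e-23    # Boltzmann constant, J/K

def photon_energy_eV(wavelength_m):
    """Energy of one carrier photon, E = h*c/lambda, in eV."""
    return h * c / wavelength_m / eV

def oscillation_period_s(wavelength_m):
    """Minimum well-defined pulse duration: one oscillation period."""
    return wavelength_m / c

green = 500e-9  # green light, mid-visible spectrum
print(photon_energy_eV(green))      # ~2.5 eV per photon
print(oscillation_period_s(green))  # ~1.7e-15 s: femtosecond-scale pulses
print(k_B * 300 / eV)               # thermal noise kT at 300 K, ~0.026 eV
```

This makes the text's comparison explicit: a visible-light photon carries roughly a hundred times the room-temperature thermal energy scale, while the proposed 0.1 eV carriers sit only a few times above it.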
Excited atoms and nuclei
The discussion of computronium based on nanotechnology assumes that the minimum scale for storage of one bit of information is one atom or one electron. This constraint turns out not to hold: as it is possible to excite the electrons of an atom to any of a very large number of states, so it is possible to store more than one bit of information using a single atom. Manipulation of atomic structure in this manner is sometimes referred to as picotechnology by futurists, though the name is something of a misnomer, as most readily-accessible excited atomic states useful for data storage produce orbitals that are much larger than even nanometre scale.
The most promising means of using individual atoms for data storage found to date is the technique of using light impulses at carefully tuned frequencies to perturb the state of Rydberg atoms. Data storage of several hundred bits in a single atom has been demonstrated by this method. Similarly, it is in principle possible to manipulate the state of electrons in the Fermi gas within a bounded conductor (such as a quantum dot) to store information in a similar manner, though this involves the use of many atoms. In both cases, the apparatus must be kept very cold, so that thermal effects don't perturb the data storage state (energy levels are very finely spaced). In the case of Rydberg atom data storage, the atoms used for storage must additionally be suspended in some form of atom trap to prevent interactions with normal matter from perturbing the excited atoms' states.
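As a simple counting picture of multi-bit storage in a single atom (not the actual pulse-shaping scheme used with Rydberg atoms), an atom with N distinguishable excited states can encode log2(N) bits by occupying one of them:

```python
import math

def bits_storable(num_distinguishable_states):
    """Bits encodable by selecting one of N distinguishable states
    (information content = log2 N). A simplified counting model."""
    return math.log2(num_distinguishable_states)

print(bits_storable(2))     # a two-state system stores exactly 1.0 bit
print(bits_storable(1024))  # 1024 resolvable levels store 10.0 bits
```

The hundreds of bits reported for Rydberg-atom storage rely on encoding data across many finely spaced levels at once, which is also why, as noted above, the apparatus must be kept very cold.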
Due to susceptibility to thermal noise and to apparatus considerations, this is not expected to be a widely-used method of data storage, except possibly in situations requiring low mass above all other design considerations. Additionally, due to the very small spacings in energy levels, transitions between energy levels tend to be very slow, resulting in very slow data access compared to other forms of data storage. It is unclear how, if at all, atoms excited in this manner would be used as a computing mechanism, as opposed to a data storage mechanism.
In principle, similar techniques can be used to store information encoded as excited states of atomic nuclei. Manipulation of the atomic nucleus in this manner is sometimes referred to as femtotechnology by futurists. It is unlikely that this mechanism will ever be practical, due to the very high energies involved in nuclear state transitions (requiring equipment capable of processing gamma rays efficiently and without degradation), and to the behavior of the strong nuclear force making excited nuclear states less stable, and forcing there to be a finite number of excited states below the binding energy of the atomic nucleus (unlike the in principle infinite number of bound states in an atom).
Limits to computation
There are several physical and practical limits to the amount of computation or data storage that can be performed with a given amount of mass, volume, or energy, and several methods have been proposed for producing computing devices or data storage devices that approach these limits. All of these methods are hypothetical, and none of them are expected to be practical in the near future.
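One frequently cited energy limit of this kind is Landauer's principle: irreversibly erasing one bit of information dissipates at least kT ln 2 of heat. The sketch below evaluates this bound at room temperature; it is offered as one concrete example of the limits mentioned above, not as an exhaustive account of them.

```python
import math

k_B = 1.381e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_k):
    """Minimum energy dissipated per bit erased (Landauer's principle)."""
    return k_B * temperature_k * math.log(2)

def max_bit_erasures(energy_joules, temperature_k=300.0):
    """Upper bound on irreversible bit operations for a given energy budget."""
    return energy_joules / landauer_limit_joules(temperature_k)

print(landauer_limit_joules(300.0))  # ~2.9e-21 J per bit at room temperature
print(max_bit_erasures(1.0))         # ~3.5e20 bit erasures per joule
```

The bound scales linearly with temperature, which is one reason proposals for extreme computing substrates often assume operation far below room temperature.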
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Computronium". A list of authors is available in Wikipedia.