Hello, and good day, or evening as the case may be! I am not going to finish this post tonight, but I will start it, because I have storage, binary, hex, nibbles, bits, and bytes on my mind. We are leaving PAIR Networks and moving to a new web host as we spin up our own: building the server, loading the software, and finding a co-location spot in a data center. OK, after more research, I am coming up with this, so we can be clear and for the record. We learned a lot of this at Athena Learning Center many moons ago, while studying for the Microsoft Networking exams:

**Lower Than Binary** – To go “lower than binary”, we start getting into the voltages, transistors, and resistors inside computer chips and storage. If you would like to read more, and you should, then go to this link on Stack Exchange. The Flip-Flop discussion came from that link.

**Integrated Circuit** – Wikipedia “An **integrated circuit** or **monolithic integrated circuit** (also referred to as an **IC**, a **chip**, or a **microchip**) is a set of electronic circuits on one small flat piece (or “chip”) of semiconductor material that is normally silicon. The integration of large numbers of tiny MOS transistors into a small chip results in circuits that are orders of magnitude smaller, faster, and less expensive than those constructed of discrete electronic components. The IC’s mass production capability, reliability, and building-block approach to circuit design has ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other digital home appliances are now inextricable parts of the structure of modern societies, made possible by the small size and low cost of ICs. Integrated circuits were made practical by technological advancements in metal–oxide–silicon (MOS) semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more MOS transistors on chips of the same size – a modern chip may have many billions of MOS transistors in an area the size of a human fingernail. These advances, roughly following Moore’s law, make computer chips of today possess millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s. ICs have two main advantages over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. 
Performance is high because the IC’s components switch quickly and consume comparatively little power because of their small size and proximity. The main disadvantage of ICs is the high cost to design them and fabricate the required photomasks. This high initial cost means ICs are only commercially viable when high production volumes are anticipated.”

**Flip-Flop** – Wikipedia, “In electronics, a **flip-flop** or **latch** is a circuit that has two stable states and can be used to store state information – a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems. Flip-flops and latches are used as data storage elements. A flip-flop is a device which stores a single *bit* (binary digit) of data; one of its two states represents a “one” and the other represents a “zero”. Such data storage can be used for storage of *state*, and such a circuit is described as sequential logic in electronics. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal. Flip-flops can be either level-triggered (asynchronous, transparent or opaque) or edge-triggered (synchronous, or clocked). The term flip-flop has historically referred generically to both level-triggered and edge-triggered circuits that store a single bit of data using gates. Recently, some authors reserve the term *flip-flop* exclusively for discussing clocked circuits; the simple ones are commonly called *transparent latches*.^{[1][2]} Using this terminology, a level-sensitive flip-flop is called a transparent latch, whereas an edge-triggered flip-flop is simply called a flip-flop. Using either terminology, the term “flip-flop” refers to a device that stores a single bit of data, but the term “latch” may also refer to a device that stores any number of bits of data using a single trigger.
The terms “edge-triggered”, and “level-triggered” may be used to avoid ambiguity.^{[3]} When a level-triggered latch is enabled it becomes transparent, but an edge-triggered flip-flop’s output only changes on a single type (positive going or negative going) of clock edge.”
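Since the flip-flop is THE one-bit storage element, here is a little Python sketch of my own (a toy model I wrote, not something from the Wikipedia article) of a positive-edge-triggered D flip-flop: the stored bit only changes when the clock goes from 0 to 1.

```python
class DFlipFlop:
    """Toy model of a positive-edge-triggered D flip-flop.

    The stored bit q only changes when the clock goes 0 -> 1;
    at all other times the data input d is ignored.
    """

    def __init__(self):
        self.q = 0          # the single stored bit
        self._last_clk = 0  # previous clock level, used to detect edges

    def tick(self, d, clk):
        if self._last_clk == 0 and clk == 1:  # rising edge
            self.q = d
        self._last_clk = clk
        return self.q

ff = DFlipFlop()
ff.tick(d=1, clk=0)   # no edge: q stays 0
ff.tick(d=1, clk=1)   # rising edge: q latches 1
ff.tick(d=0, clk=1)   # clock held high: d ignored, q stays 1
print(ff.q)           # -> 1
```

Notice the third `tick`: even though `d` went back to 0, the output held its value, which is exactly the “storage” behavior the quote describes.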

**Binary Number** – Wikipedia: ” In mathematics and digital electronics, a **binary number** is a number expressed in the **base-2 numeral system** or **binary numeral system**, which uses only two symbols: typically “0” (zero) and “1” (one). The base-2 numeral system is a positional notation with a radix of 2. Each digit is referred to as a bit. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used by almost all modern computers and computer-based devices. Any number can be represented by a sequence of bits (binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states. Any of the following rows of symbols can be interpreted as the binary numeric value of 667:

| 1 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ☒ | ☐ | ☒ | ☐ | ☐ | ☒ | ☒ | ☐ | ☒ | ☒ |
| y | n | y | n | n | y | y | n | y | y |

The numeric value represented in each case is dependent upon the value assigned to each symbol. In the earlier days of computing, switches, punched holes and punched paper tapes were used to represent binary values.^{[30]} In a modern computer, the numeric values may be represented by two different voltages; on a magnetic disk, magnetic polarities may be used. A “positive”, “yes“, or “on” state is not necessarily equivalent to the numerical value of one; it depends on the architecture in use.”
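You can check that the digit row above really is 667 with a couple of lines of Python (my own quick demo, not part of the Wikipedia quote):

```python
# The ten symbols from the table, read as binary digits:
bits = "1010011011"

# Python can parse a base-2 string directly:
print(int(bits, 2))        # -> 667

# Or do the positional arithmetic by hand: each bit is worth 2**position
value = sum(int(b) << i for i, b in enumerate(reversed(bits)))
print(value)               # -> 667

# And back again, from decimal to binary:
print(bin(667))            # -> 0b1010011011
```

The hand-rolled sum is just the “positional notation with a radix of 2” from the quote, spelled out: 512 + 128 + 16 + 8 + 2 + 1 = 667.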

**Binary Code** – Wikipedia, “A **binary code** represents text, computer processor instructions, or any other data using a two-symbol system. The two-symbol system used is often “0” and “1” from the binary number system. The binary code assigns a pattern of binary digits, also known as bits, to each character, instruction, etc. For example, a binary string of eight bits can represent any of 256 possible values and can, therefore, represent a wide variety of different items. In computing and telecommunications, binary codes are used for various methods of encoding data, such as character strings, into bit strings. Those methods may use fixed-width or variable-width strings. In a fixed-width binary code, each letter, digit, or other character is represented by a bit string of the same length; that bit string, interpreted as a binary number, is usually displayed in code tables in octal, decimal or hexadecimal notation. There are many character sets and many character encodings for them. A bit string, interpreted as a binary number, can be translated into a decimal number. For example, the lower case *a*, if represented by the bit string `01100001` (as it is in the standard ASCII code), can also be represented as the decimal number “97”.”
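That lower-case *a* example is easy to verify in Python (my own demo, not from the article) — character, decimal code, and bit string are three views of the same 8-bit pattern:

```python
# 'a' in ASCII: character, decimal code, and bit string.
print(ord('a'))             # -> 97
print(format(97, '08b'))    # -> 01100001
print(chr(0b01100001))      # -> a

# An 8-bit string can take 2**8 = 256 distinct values, as the quote says:
print(2 ** 8)               # -> 256
```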

**Hexadecimal** – Wikipedia, “In mathematics and computing, **hexadecimal** (also **base 16**, or **hex**) is a positional system that represents numbers using a base of 16. Unlike the common way of representing numbers with ten symbols, it uses sixteen distinct symbols, most often the symbols “0”–”9″ to represent values zero to nine, and “A”–”F” (or alternatively “a”–”f”) to represent values ten to fifteen. Hexadecimal numerals are widely used by computer system designers and programmers, as they provide a human-friendly representation of binary-coded values. Each hexadecimal digit represents four binary digits, also known as a nibble, which is half a byte. For example, a single byte can have values ranging from 00000000 to 11111111 in binary form, which can be conveniently represented as 00 to FF in hexadecimal. In mathematics, a subscript is typically used to specify the base, also known as the radix. For example, the decimal value 10,995 would be expressed in hexadecimal as 2AF3_{16}. In programming, a number of notations are used to support hexadecimal representation, usually involving a prefix or suffix. The prefix `0x` is used in C and related languages, which would denote this value by `0x2AF3`. Hexadecimal is used in the transfer encoding **Base16**, in which each byte of the plaintext is broken into two 4-bit values and represented by two hexadecimal digits.”
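Python uses the same `0x` prefix as C, so the 2AF3 example from the quote can be checked directly (my own demo), along with the Base16 transfer encoding via the standard library:

```python
import base64

# 0x2AF3 is the hexadecimal literal for decimal 10,995:
print(0x2AF3)                 # -> 10995
print(hex(10995))             # -> 0x2af3

# One byte spans 00-FF in hex, i.e. two hex digits (two nibbles):
print(format(0, '02X'), format(255, '02X'))   # -> 00 FF

# Base16 transfer encoding: each plaintext byte becomes two hex digits.
print(base64.b16encode(b'Hi'))                # -> b'4869'
```

In the last line, `H` is byte 0x48 and `i` is byte 0x69, so the two-byte string comes out as four hex digits.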

**Nibble** – Wikipedia, “In computing, a **nibble**^{[1]} (occasionally **nybble** or **nyble** to match the spelling of byte) is a four-bit aggregation,^{[1][2][3]} or half an octet. It is also known as **half-byte**^{[4]} or **tetrade**.^{[5][6]} In a networking or telecommunication context, the nibble is often called a **semi-octet**,^{[7]} **quadbit**,^{[8]} or **quartet**.^{[9][10]} A nibble has sixteen (2^{4}) possible values. A nibble can be represented by a single hexadecimal digit and called a **hex digit**.^{[11]} A full byte (octet) is represented by two hexadecimal digits; therefore, it is common to display a byte of information as two nibbles. Sometimes the set of all 256 byte values is represented as a 16×16 table, which gives easily readable hexadecimal codes for each value. Four-bit computer architectures use groups of four bits as their fundamental unit. Such architectures were used in early microprocessors, pocket calculators and pocket computers. They continue to be used in some microcontrollers. In this context, 4-bit groups were sometimes also called *characters*^{[12]} rather than nibbles.^{[1]}”
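Here is the nibble math in a few lines of Python (again my own demo): a right shift grabs the high nibble, a mask grabs the low one, and each nibble is exactly one hex digit.

```python
# Split a byte into its high and low nibbles with a shift and a mask.
byte = 0xA7          # one byte, written as two hex digits

high = byte >> 4     # top four bits    -> 0xA (decimal 10)
low  = byte & 0x0F   # bottom four bits -> 0x7 (decimal 7)

print(high, low)                # -> 10 7
print(format(byte, '02X'))      # -> A7: one hex digit per nibble

# A nibble has 2**4 = 16 possible values, hence one hex digit each:
print(2 ** 4)                   # -> 16
```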

**Bit** – Wikipedia, “The **bit** is a basic unit of information in information theory, computing, and digital communications. The name is a portmanteau of *binary digit*.^{[1]} In information theory, one bit is typically defined as the information entropy of a binary random variable that is 0 or 1 with equal probability,^{[2]} or the information that is gained when the value of such a variable becomes known.^{[3][4]} As a unit of information, the bit is also known as a *shannon*,^{[5]} named after Claude E. Shannon. As a **binary digit**, the bit represents a logical state, having only one of two values. It may be physically implemented with a two-state device. These values are most commonly represented as either *0* or *1*, but other representations such as *true*/*false*, *yes*/*no*, *+*/*−*, or *on*/*off* are common. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. The symbol for the binary digit is either *bit*, per recommendation by the IEC 80000-13:2008 standard, or the lowercase character *b*, as recommended by the IEEE 1541-2002 and IEEE Std 260.1-2004 standards. A group of eight binary digits is commonly called one byte, but historically the size of the byte is not strictly defined.”

**Byte** – Wikipedia, “The **byte** is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer^{[1][2]} and for this reason it is the smallest addressable unit of memory in many computer architectures. The size of the byte has historically been hardware dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used.^{[3][4][5][6]} The six-bit character code was an often used implementation in early encoding systems and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 24, 36, 48, or 60 bits, corresponding to 2, 4, 6, 8, or 10 six-bit bytes. In this era, bit groupings in the instruction stream were often referred to as *syllables*, before the term *byte* became common. The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte—2 to the power 8 is 256.^{[7]} The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the eight-bit size.^{[8]} Modern architectures typically use 32- or 64-bit words, built of four or eight bytes. The unit symbol for the byte was designated as the upper-case letter *B* by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE)^{[9]} in contrast to the bit, whose IEEE symbol is a lower-case *b*. Internationally, the unit *octet*, symbol *o*, explicitly defines a sequence of eight bits, eliminating the ambiguity of the byte.^{[10][11]}”
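That upper-case *B* versus lower-case *b* distinction matters in practice: ISPs sell megabits per second, while file managers show megabytes. A tiny conversion sketch (the function name is my own, just for illustration):

```python
def mbps_to_mb_per_s(megabits_per_second):
    """Convert a line rate in Mb/s (bits) to MB/s (bytes): 8 bits = 1 byte."""
    return megabits_per_second / 8

# A "100 Mbps" connection moves at most 12.5 megabytes per second:
print(mbps_to_mb_per_s(100))   # -> 12.5
```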

**Kilobyte** – Wikipedia, “The **kilobyte** is a multiple of the unit byte for digital information. The International System of Units (SI) defines the prefix *kilo* as 1000 (10^{3}); per this definition, one kilobyte is 1000 bytes.^{[1]} The internationally recommended unit symbol for the kilobyte is **kB**.^{[1]} In some areas of information technology, particularly in reference to digital memory capacity, *kilobyte* instead denotes 1024 (2^{10}) bytes. This arises from the powers-of-two sizing common to memory circuit design. In this context, the symbols **KB** and **K** are often used.^{[citation needed]}”

**Megabyte** – Wikipedia, “The **megabyte** is a multiple of the unit byte for digital information. Its recommended unit symbol is **MB**. The unit prefix *mega* is a multiplier of 1,000,000 (10^{6}) in the International System of Units (SI).^{[1]} Therefore, one megabyte is one million bytes of information. This definition has been incorporated into the International System of Quantities. However, in the computer and information technology fields, several other definitions are used that arose for historical reasons of convenience. A common usage has been to designate one megabyte as 1,048,576 bytes (2^{20} B), a measurement that conveniently expresses the binary multiples inherent in digital computer memory architectures. However, most standards bodies have deprecated this usage in favor of a set of binary prefixes,^{[2]} in which this quantity is designated by the unit mebibyte (MiB). Less common is a convention that uses the megabyte to mean 1000×1024 (1,024,000) bytes.^{[2]}”

**Gigabyte** – Wikipedia, “The **gigabyte** (/ˈɡɪɡəbaɪt, ˈdʒɪɡə-/)^{[1]} is a multiple of the unit byte for digital information. The prefix *giga* means 10^{9} in the International System of Units (SI). Therefore, one gigabyte is one billion bytes. The unit symbol for the gigabyte is **GB**. This definition is used in all contexts of science, engineering, business, and many areas of computing, including hard drive, solid state drive, and tape capacities, as well as data transmission speeds. However, the term is also used in some fields of computer science and information technology to denote 1,073,741,824 (1024^{3} or 2^{30}) bytes, particularly for sizes of RAM. The use of *gigabyte* may thus be ambiguous. Hard disk capacities are described and marketed by drive manufacturers using the standard metric definition of the gigabyte, but when a 400 GB drive’s capacity is displayed by, for example, Microsoft Windows, it is reported as 372 GB, using a binary interpretation. To address this ambiguity, the International System of Quantities standardizes the binary prefixes which denote a series of integer powers of 1024. With these prefixes, a memory module that is labeled as having the size “1 GB” has one gibibyte (1 GiB) of storage capacity. Using the ISQ definitions, the “372 GB” reported for the hard drive is actually 372 GiB (400 GB).”
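The 400 GB drive example from the quote works out in a few lines of Python (my own arithmetic), and it shows exactly where the “missing” gigabytes go:

```python
# Decimal (SI) vs binary (IEC) prefixes for the same number of bytes.
GB  = 1000 ** 3   # gigabyte, as drive makers use it
GiB = 1024 ** 3   # gibibyte, as Windows reports it

drive_bytes = 400 * GB
print(drive_bytes / GiB)   # -> about 372.53, the "372 GB" Windows shows

# The gap between the two definitions is about 7.4% at the giga level:
print(GiB / GB)            # -> 1.073741824
```

Nothing is lost; the same bytes are just being divided by a bigger unit.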

**Terabyte** – You are only three measurements away from all the data in the world here, just keep scrolling down…amazing!! Wikipedia, “The **terabyte** is a multiple of the unit byte for digital information. The prefix *tera* represents the fourth power of 1000, and means 10^{12} in the International System of Units (SI), and therefore one terabyte is one trillion (short scale) bytes. The unit symbol for the terabyte is **TB**. 1 TB = 1,000,000,000,000 bytes = 10^{12} bytes = 1000 gigabytes. 1000 TB = 1 petabyte (PB). A related unit, the tebibyte (TiB), using a binary prefix, is equal to 1024^{4} bytes. One terabyte is about 0.9095 TiB. Despite the introduction of these standardized binary prefixes, the terabyte is still also commonly used in some computer operating systems, primarily Microsoft Windows, to denote 1,099,511,627,776 (1024^{4} or 2^{40}) bytes for disk drive capacity.^{[1][2]}”

**Petabyte** – Wikipedia, “A **petabyte** is 10^{15} bytes of digital information. The unit symbol for the petabyte is **PB**. The name is composed of the SI prefix peta- (P) composed with the non-SI unit of a byte. 1 PB = 1,000,000,000,000,000 B = 10^{15} bytes = 1000 terabytes. 1000 PB = 1 exabyte (EB). A related unit, the pebibyte (PiB), using a binary prefix, is equal to 1024^{5} bytes, which is more than 12% greater (2^{50} bytes = 1,125,899,906,842,624 bytes).

Examples of the use of the petabyte to describe data sizes in different fields are:

- Telecommunications (capacity): The world’s effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of information in 1986, 471 petabytes in 1993, 2,200 petabytes in 2000, and 65,000 petabytes in 2007 (this is the informational equivalent to every person exchanging 6 newspapers per day).^{[1]}
- Telecommunications (usage): In 2008, AT&T transferred about 30 petabytes of data through its networks each day.^{[2]} That number grew to 197 petabytes daily by March 2018.^{[3]}
- Email: In May 2013, Microsoft announced that as part of their migration of Hotmail accounts to the new Outlook.com email service, they migrated over 150 petabytes of user data in six weeks.^{[4]}
- File sharing (centralized): At its 2012 closure of file storage services, Megaupload held ~28 petabytes of user-uploaded data.^{[5]}
- File sharing (peer-to-peer): As of 2013, BitTorrent Sync had transferred over 30 petabytes of data since its pre-alpha release in January 2013.^{[6]}
- National library: The American Memory digital archive of public domain resources hosted by the United States Library of Congress contained 15 million digital objects in 2016, comprising over 7 petabytes of digital data.^{[7]}
- Video streaming: As of May 2013, Netflix had 3.14 petabytes of video “master copies”, which it compresses and converts into 100 different formats for streaming.^{[8]}
- Photos: As of January 2013, Facebook users had uploaded over 240 billion photos,^{[9]} with 350 million new photos every day. For each uploaded photo, Facebook generates and stores four images of different sizes, which translated to a total of 960 billion images and an estimated 357 petabytes of storage.^{[10]}
- Music: One petabyte of average MP3-encoded songs (for mobile, roughly one megabyte per minute) would require 2000 years to play.^{[11]}
- Games: Steam, a digital distribution service, delivers over 16 petabytes of content to American users weekly.^{[12]}
- Physics: The experiments in the Large Hadron Collider produce about 15 petabytes of data per year, which are distributed over the Worldwide LHC Computing Grid.^{[13]} In July 2012 it was revealed that CERN had amassed about 200 petabytes of data from the more than 800 trillion collisions looking for the Higgs boson.^{[14]} The Large Hadron Collider is also able to produce 1 petabyte of data per second, but most of it is filtered out.^{[15]}
- Neurology: It is estimated that the human brain’s ability to store memories is equivalent to about 2.5 petabytes of binary data.^{[16][17]}
- Video: Uncompressed 1080p 30 fps HD RGB video (1920×1080 pixels / 3 bytes per pixel) running for 100 years would amount to approximately 600 PB of data.^{[citation needed]}
- Sports: A petabyte’s worth of 1 GB flash drives lined up end to end would stretch across 92 football fields.^{[18]}”
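That “2000 years of MP3s” claim checks out as a rough order of magnitude. Here is my own back-of-envelope version in Python, using the quote’s figure of about one megabyte per minute of audio:

```python
petabyte_bytes = 1000 ** 5       # 10**15 bytes in one petabyte
bytes_per_minute = 1000 ** 2     # ~1 MB of MP3 audio per minute (per the quote)

minutes = petabyte_bytes / bytes_per_minute   # 10**9 minutes of music
years = minutes / (60 * 24 * 365.25)          # minutes in an average year
print(round(years))   # -> 1901, i.e. roughly 2000 years of playback
```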

**Exabyte** – Wikipedia, “The **exabyte** is a multiple of the unit byte for digital information. In the International System of Units (SI), the prefix *exa* indicates multiplication by the sixth power of 1000 (10^{18}). Therefore, one exabyte is one quintillion bytes (short scale). The unit symbol for the exabyte is **EB**. 1 EB = 10^{18} bytes = 1000^{6} bytes = 1,000,000,000,000,000,000 B = 1000 petabytes = 1 million terabytes = 1 billion gigabytes. 1000 EB = 1 zettabyte (ZB). A related unit, the exbibyte, using a binary prefix, is equal to 1024^{6} (= 2^{60}) bytes, about 15% larger.

Usage examples and size comparisons:

- A processor with a 64-bit address bus can address 16 exbibytes of memory,^{[1]} which is over 18 exabytes.
- The world’s technological capacity to store information grew from 2.6 (“optimally compressed”) exabytes in 1986 to 15.8 in 1993, over 54.5 in 2000, and to 295 (optimally compressed) exabytes in 2007. This is equivalent to less than one CD (650 MB) per person in 1986 (539 MB per person), roughly four in 1993, 12 in 2000, and almost 61 in 2007. Piling up the imagined 404 billion CDs from 2007 would create a stack from the Earth to the Moon and a quarter of this distance beyond (with 1.2 mm thickness per CD).^{[2]}
- The world’s technological capacity to receive information through one-way broadcast networks was 432 exabytes of information in 1986, 715 exabytes in 1993, 1,200 exabytes in 2000, and 1,900 in 2007 (and with all the preceding examples assuming that those figures represent “optimally compressed” data).^{[2]}
- The world’s effective capacity to exchange information through two-way telecommunication networks was 0.281 exabytes of information in 1986, 0.471 in 1993, 2.2 in 2000, and 65 exabytes in 2007 (yet again, all such amounts listed are strictly working off the basis that the data was in an “optimally compressed” form).^{[2]}
- In 2004, the global monthly Internet traffic passed 1 exabyte for the first time. In January 2007, Bret Swanson of the Discovery Institute coined the term *exaflood* for a supposedly impending flood of exabytes that would cause the Internet’s congestive collapse.^{[3][4]} Nevertheless, global Internet traffic has continued its exponential growth, undisturbed, and as of March 2010 it was estimated at 21 exabytes per month.^{[5]}
- The global data volume at the end of 2009 had reached 800 exabytes.^{[citation needed]}
- According to an International Data Corporation paper sponsored by EMC Corporation (now Dell EMC), 161 exabytes of data were created in 2006, “3 million times the amount of information contained in all the books ever written”, with the number expected to hit 988 exabytes in 2010.^{[6][7][8]}
- A gram of DNA can theoretically hold 455 exabytes.^{[9]}
- In 2014, DARPA’s ARGUS-IS surveillance system could stream 1 exabyte of high-definition video per day.^{[10]}
- According to the CSIRO, in the next decade (the 2010s), astronomers expect to be processing 10 petabytes of data every hour from the Square Kilometre Array (SKA) telescope.^{[11]} The array is thus expected to generate approximately one exabyte every four days of operation. According to IBM, the new SKA telescope initiative will generate over an exabyte of data every day; IBM is designing hardware to process this information.^{[12]}
- According to the Digital Britain Report, 494 exabytes of data were transferred across the globe on June 15, 2009.^{[13]}
- Several filesystems use disk formats that support theoretical volume sizes of several exabytes, including Btrfs, XFS, ZFS, exFAT, NTFS, HFS Plus, and ReFS.
- The ext4 file system format supports volumes up to 1.1529215 exabytes in size, although the userspace tools cannot yet administer such filesystems.
- Oracle Corporation claimed the first exabyte tape library with the SL8500 and the T10000C tape drive in January 2011.^{[14]}
- 100 years of uncompressed video at 8K UHD (7680×4320 pixels), 120 frames per second and 16 bits per color channel (RGB) would amount to 72 exabytes.^{[citation needed]}”

**Zettabyte** – In 2020 they say there are 2.7 zettabytes of data in all of the world combined!!! This guy says the world has 18 zettabytes, but not all of it is stored, whatever that means. Wikipedia, “The **zettabyte** is a multiple of the unit byte for digital information. The prefix *zetta* indicates multiplication by the seventh power of 1000 or 10^{21} in the International System of Units (SI). A zettabyte is one sextillion (one long scale trilliard) bytes.^{[1][2][3][4][5]} The unit symbol is **ZB**. 1 ZB = 1000^{7} bytes = 10^{21} bytes = 1,000,000,000,000,000,000,000 bytes = 1000 exabytes = 1 million petabytes = 1 billion terabytes = 1 trillion gigabytes. 1000 ZB = 1 yottabyte (YB). A related unit, the zebibyte (ZiB), using a binary prefix, is equal to 1024^{7} (= 2^{70}) bytes (approximately 1.181 ZB).

- Between 1986 and 2007, the world’s technological capacity to receive information through one-way broadcast networks was 0.432 zettabytes of optimally compressed information in 1986, 0.715 ZB in 1993, 1.2 ZB in 2000, and 1.9 (optimally compressed) ZB in 2007, this being the informational equivalent to every person on Earth receiving 174 newspapers per day.^{[9][10]}
- In 2003, Mark Liberman calculated the storage requirements for all human speech ever spoken at 42 zettabytes if digitized as 16 kHz 16-bit audio. He did this in response to a popular^{[11][12][13]} expression that states “all words ever spoken by human beings” could be stored in approximately 5 exabytes of data. Liberman confessed that “maybe the authors [of the exabyte estimate] were thinking about text”.^{[14]}
- In 2007, humankind successfully sent 1.9 zettabytes of information through broadcast technology such as televisions and GPS, per research from the University of Southern California.^{[15]}
- In 2008, Americans alone consumed 3.6 zettabytes of information,^{[clarification needed]} per a 2009 study from the University of California, San Diego.^{[16]}
- As of 2009, the entire World Wide Web was estimated to contain close to 500 exabytes, or half a zettabyte.^{[17]}
- In 2011, the International Data Corporation expected the “total amount of global data” to grow to 2.7 zettabytes during 2012, an increase of 48% from 2011.^{[18]}
- In 2012, Americans accessed 6.9 zettabytes of data, per a 2013 study.^{[19]}
- In 2013, one expert estimated that the “amount of data generated worldwide” would reach 4 zettabytes by the end of the year.^{[20]}
- In 2018, International Data Corporation (IDC) estimated the global datasphere had reached 33 zettabytes and is expected to reach 175 zettabytes by 2025.^{[21]}”

**Yottabyte** – Wikipedia, “The **yottabyte** is a multiple of the unit byte for digital information. The prefix *yotta* indicates multiplication by the eighth power of 1000 or 10^{24} in the International System of Units (SI), and therefore one yottabyte is one septillion (one long scale quadrillion) bytes. The unit symbol for the yottabyte is **YB**. The yottabyte, adopted in 1991, is the largest of the formally defined multiples of the byte. 1 YB = 1000^{8} bytes = 10^{24} bytes = 1,000,000,000,000,000,000,000,000 bytes = 1000 zettabytes = 1 trillion terabytes. A related unit, the yobibyte (YiB), using a binary prefix, is equal to 1024^{8} bytes (approximately 1.209 YB).

Examples of use: In 2010, it was estimated that storing a yottabyte on terabyte-size disk drives would require one million city-block-size data centers, as big as the states of Delaware and Rhode Island combined.^{[1]} By late 2016, memory density had increased to the point where a yottabyte could be stored on SD cards occupying roughly twice the size of the Hindenburg^{[2]} (around 400 thousand cubic metres). The total amount of data that could be stored in the observable universe using each of the 10^{78} to 10^{82} atoms as single bits of information (using their spin, for example) is between 1.25×10^{53} and 1.25×10^{57} yottabytes.^{[3]} Since the radius of a holmium atom is 233 pm, an atomic memory device could store one yottabyte in an area roughly the size of a nickel.^{[4]}”

**Brontobyte** – 1000 yottabytes. Wikipedia, “Many personal, and sometimes facetious, proposals for additional metric prefixes have been formulated.^{[11][12]} The prefix *bronto*, as used in the term *brontobyte*, has been used to represent anything from 10^{15} to 10^{27} bytes, most often 10^{27}.^{[13][14][15][16][17]} The SI includes standardised prefixes for 10^{15} (peta), 10^{18} (exa), 10^{21} (zetta) and 10^{24} (yotta). In 2010, an online petition sought to establish hella as the SI prefix for 10^{27}, a movement that began on the campus of UC Davis.^{[18][19]} The prefix, which has since appeared in the San Francisco Chronicle, Daily Telegraph, *Wired* and some other scientific magazines, was recognised by Google, in a non-serious fashion, in May 2010.^{[20][21][22]} Ian Mills, president of the Consultative Committee on Units, considers the chances of official adoption to be remote.^{[23]} The prefix *geop* and term *geopbyte* has been used in the tech industry to refer to 10^{30} bytes following *brontobyte*.^{[13]} The ascending prefixes *tera* (1000^{4}), *peta* (1000^{5}), *exa* (1000^{6}), *zetta* (1000^{7}), and *yotta* (1000^{8}) are based on the Greek-derived numeric prefixes *tetra* (4), *penta* (5), *hexa* (6), *hepta* (7), and *octo* (8). In addition, the final letters of the alphabet, *z* and *y*, appear in the largest SI prefixes, *zetta* and *yotta*. Similarly, the descending prefixes *zepto* (1000^{−7}) and *yocto* (1000^{−8}) are derived from Latin/Greek *septem*/*hepta* (7) and *octo*/*oktô* (8) plus the initial letters *z* and *y*. The initial letters were changed because the previously proposed ascending *hepta* was already in use as a numerical prefix (implying seven) and the letter *h* was in use as both an SI unit (hour) and a prefix (*hecto*, 10^{2}); the same applied to “s” from the previously proposed descending “septo” (i.e. the SI unit *s*, seconds), while *o* for *octa/octo* was problematic since a symbol *o* could be confused with zero.^{[nb 1]} The CGPM has decided to extend this *z–y* backwards through the alphabet,^{[24]} though it is not clear if the distinction of the historically related sets of letters *u/v/w* and *i/j* would be retained (it is common to avoid contrasting them in scientific series), or if letters such as *T*, which are already in use as SI units or prefixes, will again be skipped.^{[nb 2]} Several personal proposals have been made for extending the series of prefixes, with ascending terms such as *xenna, weka, vendeka* (from Greek *ennea* (9), *deka* (10), *endeka* (11)) and descending terms such as *xono*, *weco*, *vundo* (from Latin *novem/nona* (9), *decem* (10), *undecim* (11)). Using Greek for ascending and Latin for descending would be consistent with established prefixes such as *deca, hecto, kilo* vs *deci, centi, milli*.^{[25]} Although some of these are repeated on the internet, none are in actual use.^{[26]}”

**Hellabyte** – They say it’s “hella big”, or a “hella lot of data”…HAHA…you are now officially in data storage heaven, or hell, lolol. Actually, no, go here…

Now that IS interesting!!!! WOW!!! More tomorrow, class. Have a good byte, I mean night!!! I can’t stand it, and I can’t wait until we offer our own petabyte of storage on our hosting platform one day, HAHA!!!