Application of array
##### Question
What is an array?

C-and-Cpp Arrays 07-08 min 50-60 sec 20-01-17, 6:59 a.m. thevijayaiem@gmail.com

In computer science, an array data structure, or simply an array, is a data structure consisting of a collection of elements (values or variables), each identified by at least one array index or key. An array is stored so that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called a one-dimensional array.


For example, an array of 10 32-bit integer variables, with indices 0 through 9, may be stored as 10 words at memory addresses 2000, 2004, 2008, ..., 2036, so that the element with index i has the address 2000 + 4 × i.


The memory address of the first element of an array is called the first address, foundation address, or base address.


Because the mathematical concept of a matrix can be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called matrices. In some cases the term "vector" is used in computing to refer to an array, although tuples rather than vectors are more correctly the mathematical equivalent. Arrays are often used to implement tables, especially lookup tables; the word table is sometimes used as a synonym of array.


Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing logic of computers. In most modern computers and many external storage devices, the memory is a one-dimensional array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for array operations.


Arrays are useful mostly because the element indices can be computed at run time. Among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size and should use the same data representation. The set of valid index tuples and the addresses of the elements (and hence the element addressing formula) are usually, but not always, fixed while the array is in use.


The term array is often used to mean array data type, a kind of data type provided by most high-level programming languages that consists of a collection of values or variables that can be selected by one or more indices computed at run time. Array types are often implemented by array structures; however, in some languages they may be implemented by hash tables, linked lists, search trees, or other data structures.


The term is also used, especially in the description of algorithms, to mean associative array or "abstract array", a theoretical computer science model (an abstract data type or ADT) intended to capture the essential properties of arrays.


### History

The first digital computers used machine-language programming to set up and access array structures for data tables, vector and matrix computations, and for many other purposes. John von Neumann wrote the first array-sorting program (merge sort) in 1945, during the building of the first stored-program computer, the EDVAC. Array indexing was originally done by self-modifying code, and later using index registers and indirect addressing. Some mainframes designed in the 1960s, such as the Burroughs B5000 and its successors, used memory segmentation to perform index-bounds checking in hardware.


Assembly languages generally have no special support for arrays, other than what the machine itself provides. The earliest high-level programming languages, including FORTRAN (1957), Lisp (1958), COBOL (1960), and ALGOL 60 (1960), had support for multi-dimensional arrays, and so does C (1972). In C++ (1983), class templates exist for multi-dimensional arrays whose dimension is fixed at runtime as well as for runtime-flexible arrays.

### Applications

Arrays are used to implement mathematical vectors and matrices, as well as other kinds of rectangular tables. Many databases, small and large, consist of (or include) one-dimensional arrays whose elements are records.


Arrays are used to implement other data structures, such as lists, heaps, hash tables, deques, queues, stacks, strings, and VLists. Array-based implementations of other data structures are frequently simple and space-efficient (implicit data structures), requiring little space overhead, but may have poor space complexity, particularly when modified, compared to tree-based data structures (compare a sorted array to a search tree).


One or more large arrays are sometimes used to emulate in-program dynamic memory allocation, particularly memory pool allocation. Historically, this has sometimes been the only way to allocate "dynamic memory" portably.


Arrays can be used to determine partial or complete control flow in programs, as a compact alternative to (otherwise repetitive) multiple `IF` statements. They are known in this context as control tables and are used in conjunction with a purpose-built interpreter whose control flow is altered according to values contained in the array. The array may contain subroutine pointers (or relative subroutine numbers that can be acted upon by `SWITCH` statements) that direct the path of the execution.

### Element identifier and addressing formulas

When data objects are stored in an array, individual objects are selected by an index that is usually a non-negative scalar integer. Indexes are also called subscripts. An index maps the array value to a stored object.


There are three ways in which the elements of an array can be indexed:

- 0 (zero-based indexing): the first element of the array is indexed by subscript 0.
- 1 (one-based indexing): the first element of the array is indexed by subscript 1.
- n (n-based indexing): the base index of an array can be freely chosen. Usually, programming languages allowing n-based indexing also allow negative index values, and other scalar data types like enumerations or characters may be used as an array index.

Arrays can have multiple dimensions, so it is not uncommon to access an array using multiple indices. For example, a two-dimensional array A with three rows and four columns might provide access to the element at the 2nd row and 4th column by the expression A[1, 3] (in a row-major language) or A[3, 1] (in a column-major language), in the case of a zero-based indexing system. Thus two indices are used for a two-dimensional array, three for a three-dimensional array, and n for an n-dimensional array.


The number of indices needed to specify an element is called the dimension, dimensionality, or rank of the array.


In standard arrays, each index is restricted to a certain range of consecutive integers (or consecutive values of some enumerated type), and the address of an element is computed by a "linear" formula on the indices.

#### One-dimensional arrays

A one-dimensional array (or single-dimension array) is a type of linear array. Accessing its elements involves a single subscript, which can represent either a row or a column index.


As an example, consider the C declaration `int anArrayName[10];`


Syntax: `datatype anArrayName[sizeOfArray];`


In the given example the array can contain 10 elements of any value available to the `int` type. In C, the array element indices are 0 through 9 inclusive in this case. For example, the expressions `anArrayName[0]` and `anArrayName[9]` are the first and last elements respectively.


For a vector with linear addressing, the element with index i is located at the address B + c × i, where B is a fixed base address and c a fixed constant, sometimes called the address increment or stride.


If the valid element indices begin at 0, the constant B is simply the address of the first element of the array. For this reason, the C programming language specifies that array indices always begin at 0; and many programmers will call that element "zeroth" rather than "first".


However, one can choose the index of the first element by an appropriate choice of the base address B. For example, if the array has five elements, indexed 1 through 5, and the base address B is replaced by B − 30c, then the indices of those same elements will be 31 to 35. If the numbering does not start at 0, the constant B may not be the address of any element.

#### Multidimensional arrays

For a multidimensional array, the element with indices i, j would have address B + c · i + d · j, where the coefficients c and d are the row and column address increments, respectively.


More generally, in a k-dimensional array, the address of an element with indices i1, i2, ..., ik is

    B + c1 · i1 + c2 · i2 + ... + ck · ik.

For example: `int a[2][3];`


This means that array a has 2 rows and 3 columns, and the array is of integer type. It can store 6 elements, which are laid out linearly in memory: first the elements of the first row, then those of the second row. The above array will be stored as a11, a12, a13, a21, a22, a23.


This formula requires only k multiplications and k additions, for any array that can fit in memory. Moreover, if any coefficient is a fixed power of 2, the multiplication can be replaced by bit shifting.


The coefficients ck must be chosen so that every valid index tuple maps to the address of a distinct element.


If the minimum legal value for every index is 0, then B is the address of the element whose indices are all zero. As in the one-dimensional case, the element indices may be changed by changing the base address B. Thus, if a two-dimensional array has rows and columns indexed from 1 to 10 and 1 to 20, respectively, then replacing B by B + c1 − 3c2 will cause them to be renumbered from 0 through 9 and 4 through 23, respectively. Taking advantage of this feature, some languages (like FORTRAN 77) specify that array indices begin at 1, as in mathematical tradition, while other languages (like Fortran 90, Pascal and Algol) let the user choose the minimum value for each index.

#### Dope vectors

The addressing formula is completely defined by the dimension k, the base address B, and the increments c1, c2, ..., ck. It is often useful to pack these parameters into a record called the array's descriptor, stride vector, or dope vector. The size of each element, and the minimum and maximum values allowed for each index, may also be included in the dope vector. The dope vector is a complete handle for the array, and is a convenient way to pass arrays as arguments to procedures. Many useful array slicing operations (such as selecting a sub-array, swapping indices, or reversing the direction of the indices) can be performed very efficiently by manipulating the dope vector.

#### Compact layouts

Often the coefficients are chosen so that the elements occupy a contiguous area of memory. However, that is not necessary. Even if arrays are always created with contiguous elements, some array slicing operations may create non-contiguous sub-arrays from them.


There are two systematic compact layouts for a two-dimensional array. For example, consider the matrix

    A = [ 1 2 3
          4 5 6
          7 8 9 ]

In the row-major order layout (adopted by C for statically declared arrays), the elements in each row are stored in consecutive positions and all of the elements of a row have a lower address than any of the elements of a consecutive row:

    1 2 3 4 5 6 7 8 9

In column-major order (traditionally used by Fortran), the elements in each column are consecutive in memory and all of the elements of a column have a lower address than any of the elements of a consecutive column:

    1 4 7 2 5 8 3 6 9

For arrays with three or more indices, "row major order" puts in consecutive positions any two elements whose index tuples differ only by one in the last index. "Column major order" is analogous with respect to the first index.


In systems which use processor cache or virtual memory, scanning an array is much faster if successive elements are stored in consecutive positions in memory, rather than sparsely scattered. Many algorithms that use multidimensional arrays will scan them in a predictable order. A programmer (or a sophisticated compiler) may use this information to choose between row- or column-major layout for each array. For example, when computing the product A·B of two matrices, it would be best to have A stored in row-major order, and B in column-major order.

#### Resizing

Main article: Dynamic array

Static arrays have a size that is fixed when they are created and consequently do not allow elements to be inserted or removed. However, by allocating a new array and copying the contents of the old array to it, it is possible to effectively implement a dynamic version of an array; see dynamic array. If this operation is done infrequently, insertions at the end of the array require only amortized constant time.


Some array data structures do not reallocate storage, but do store a count of the number of elements of the array in use, called the count or size. This effectively makes the array a dynamic array with a fixed maximum size or capacity; Pascal strings are examples of this.

#### Non-linear formulas

More complicated (non-linear) formulas are occasionally used. For a compact two-dimensional triangular array, for instance, the addressing formula is a polynomial of degree 2.

### Efficiency

Both store and select take (deterministic worst case) constant time. Arrays take linear (O(n)) space in the number of elements n that they hold.


In an array with element size k and on a machine with a cache line size of B bytes, iterating through an array of n elements requires a minimum of ceiling(nk/B) cache misses, because its elements occupy contiguous memory locations. This is roughly a factor of B/k better than the number of cache misses needed to access n elements at random memory locations. As a consequence, sequential iteration over an array is noticeably faster in practice than iteration over many other data structures, a property called locality of reference (this does not mean, however, that using a perfect hash or trivial hash within the same local array will not be even faster, and achievable in constant time). Libraries provide low-level optimized facilities for copying ranges of memory (such as memcpy) which can be used to move contiguous blocks of array elements significantly faster than can be achieved through individual element access. The speedup of such optimized routines varies by array element size, architecture, and implementation.


Memory-wise, arrays are compact data structures with no per-element overhead. There may be a per-array overhead, e.g. to store index bounds, but this is language-dependent. It can also happen that elements stored in an array require less memory than the same elements stored in individual variables, because several array elements can be stored in a single word; such arrays are often called packed arrays. An extreme (but commonly used) case is the bit array, where every bit represents a single element. A single octet can thus hold up to 256 different combinations of up to 8 different conditions, in the most compact form.


Array accesses with statically predictable access patterns are a major source of data parallelism.

08-02-17, 12:26 p.m. Mandeep12
