Digital Communications 1. Safwan El Assad.

Author: Safwan El Assad
Publisher: John Wiley & Sons Limited
ISBN: 9781119779773

      The relationships between the three units are:

       – natural unit: 1 nat = log2(e) = 1/loge(2) = 1.44 bits of information;

       – decimal unit: 1 dit = log2(10) = 1/log10(2) = 3.32 bits of information.

      They are pseudo-units without dimension.
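As a quick numerical check of these conversion factors, here is a minimal Python sketch (the constant names are illustrative, not from the text):

```python
import math

# Conversion factors between information units, as given in the text
NAT_TO_BITS = math.log2(math.e)  # 1 nat = log2(e) = 1/loge(2) ≈ 1.44 bits
DIT_TO_BITS = math.log2(10)      # 1 dit = log2(10) = 1/log10(2) ≈ 3.32 bits

print(f"1 nat = {NAT_TO_BITS:.2f} bits of information")
print(f"1 dit = {DIT_TO_BITS:.2f} bits of information")
```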

      Let a stationary memoryless source S produce random independent events (symbols) si, belonging to a predetermined set [S] = [s1, s2, ..., sN]. Each event (symbol) si occurs with a given probability pi, with:

0 ≤ pi ≤ 1, i = 1, 2, ..., N, and p1 + p2 + ... + pN = 1

      The source S is then characterized by the set of probabilities [P] = [p1, p2, ..., pN]. We are now interested in the average amount of information delivered by this source, that is to say, resulting from the complete set of events (symbols) it can produce, each taken into account with its probability of occurrence. This average amount of information from the source S is called the “entropy H(S) of the source”.

      It is therefore defined by:

      [2.15] H(S) = − ∑i=1..N pi log2(pi) bits of information per symbol
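Definition [2.15] can be sketched in Python as follows (the function name `entropy` is illustrative):

```python
import math

def entropy(probs):
    """Entropy H(S) = -sum(p_i * log2(p_i)), in bits of information per symbol.
    Terms with p_i = 0 contribute nothing (convention 0 * log2(0) = 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A source with four equiprobable symbols carries log2(4) = 2 bits per symbol
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```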

      2.3.2. Fundamental lemma

      Consider two probability partitions on S:

[P] = [p1, p2, ..., pN] and [Q] = [q1, q2, ..., qN], with ∑i pi = ∑i qi = 1

      we have the inequality:

      [2.16] ∑i=1..N pi log2(1/pi) ≤ ∑i=1..N pi log2(1/qi)

      Indeed, since loge(x) ≤ x − 1 for any positive real x, then, taking x = qi/pi:

∑i pi loge(qi/pi) ≤ ∑i pi (qi/pi − 1) = ∑i qi − ∑i pi = 1 − 1 = 0, which gives inequality [2.16].
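A numerical illustration of the fundamental lemma [2.16] (Gibbs' inequality), using two hypothetical distributions [P] and [Q] on three symbols:

```python
import math

def cross_sum(p, q):
    # sum over i of p_i * log2(1/q_i); terms with p_i = 0 contribute nothing
    return sum(pi * math.log2(1 / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
q = [1/3, 1/3, 1/3]

H_p = cross_sum(p, p)    # entropy of [P]: 1.5 bits
H_pq = cross_sum(p, q)   # same average, but with log2(1/q_i): log2(3) ≈ 1.585 bits
assert H_p <= H_pq       # the lemma; equality only when q_i = p_i for all i
```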

      2.3.3. Properties of entropy

      The entropy H(S) has the following properties:
       – Positive: since 0 ≤ pi ≤ 1 (with the convention 0 log2 0 = 0), each term −pi log2 pi is non-negative.

       – Continuous: because it is a sum of continuous functions of each pi.

       – Symmetric: with respect to all the variables pi.

       – Upper bounded: the entropy has a maximum value, obtained for a uniform law (pi = 1/N): Hmax = log2(N).

       – Additive: let the symbol of probability pN be split into two sub-events of probabilities q1 and q2, with pN = q1 + q2; then:

      [2.17] H(p1, ..., pN−1, q1, q2) = H(p1, ..., pN) + pN H(q1/pN, q2/pN)
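The additivity (grouping) property, in the reconstructed form [2.17] above, can be verified numerically; the split values below are hypothetical:

```python
import math

def H(probs):
    # Entropy in bits of information; 0 * log2(0) taken as 0
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Split the last symbol, of probability pN = 0.5, into two sub-events
p = [0.25, 0.25, 0.5]
q1, q2 = 0.3, 0.2
pN = q1 + q2

lhs = H([0.25, 0.25, q1, q2])              # entropy of the refined partition
rhs = H(p) + pN * H([q1 / pN, q2 / pN])    # grouping formula [2.17]
assert abs(lhs - rhs) < 1e-12
```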

      2.3.4. Examples of entropy

      2.3.4.1. Two-event entropy (Bernoulli’s law)


      Figure 2.1. Entropy of a two-event source

      The entropy is H(p) = −p log2(p) − (1 − p) log2(1 − p); its maximum, equal to 1 bit of information, is obtained for p = 1/2.
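The binary entropy function can be sketched as follows (the function name `H2` is illustrative):

```python
import math

def H2(p):
    """Entropy of a two-event (Bernoulli) source, in bits of information."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(H2(0.5))  # 1.0, the maximum, reached at p = 1/2
```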

      2.3.4.2. Entropy of an alphabetic source with (26 + 1) characters

       – For a uniform law: ⟹ H = log2(27) = 4.75 bits of information per character;

       – In the French language (according to a statistical study): ⟹ H = 3.98 bits of information per character.

      Thus, a text of 100 characters provides 398 bits of information.

      The inequality of the probabilities causes a loss of 475 − 398 = 77 bits of information.
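The arithmetic of this example can be reproduced directly (the French entropy value 3.98 is the statistical figure quoted in the text):

```python
import math

H_uniform = math.log2(27)  # ≈ 4.75 bits of information per character
H_french = 3.98            # measured entropy of French, per the text

n_chars = 100
print(n_chars * H_uniform)                # ≈ 475 bits for a uniform law
print(n_chars * H_french)                 # 398 bits for real French text
print(n_chars * (H_uniform - H_french))   # ≈ 77 bits of information lost
```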

      The information rate of a source is defined by:

      [2.18] D = H(S) / τ̄ bits of information per second

      where τ̄ represents the average duration of a symbol emitted by the source.

      The redundancy of a source is defined as follows:

      [2.19] R = Hmax − H(S) = log2(N) − H(S)
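A small sketch of these two definitions, assuming the reconstructed forms of [2.18] and [2.19] above (rate = entropy over mean symbol duration, redundancy = log2 N − H); the function names and the 10 ms symbol duration are hypothetical:

```python
import math

def information_rate(H, tau_bar):
    """Information rate [2.18]: entropy per average symbol duration, in bits/s."""
    return H / tau_bar

def redundancy(H, N):
    """Redundancy [2.19] as reconstructed here: R = log2(N) - H."""
    return math.log2(N) - H

# French-text example: H = 3.98 bits/char, assuming 10 ms per character
print(information_rate(3.98, 0.010))  # bits of information per second
print(redundancy(3.98, 27))           # bits of redundancy per character
```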

      Between the source of information and the destination, there is the medium through which information is transmitted. This medium, including the equipment necessary for transmission, is called the transmission channel (or simply the channel).

      Let us consider a discrete, stationary and memoryless channel (discrete: the alphabets of the symbols at the input and at the output are discrete).


      Figure 2.2. Basic transmission system based on a discrete channel. For a color version of this figure, see www.iste.co.uk/assad/digital1.zip

      We denote:

       – [X] = [x1, x2, ..., xn]: the set of all the symbols at the input of the channel;

       – [Y] = [y1, y2, ..., ym]: the set of all the symbols at the output of the channel;

       – [P(X)] = [p(x1), p(x2), ..., p(xn)]: the vector of probabilities of the symbols at the input of the channel;

       – [P(Y)] = [p(y1), p(y2), ..., p(ym)]: the vector of probabilities of the symbols at the output of the channel.

      Because of the perturbations, the space [Y] can be different from the space [X], and the probabilities P(Y) can be different from the probabilities P(X).

      We define a product space [XY] and we introduce the matrix of the probabilities of the joint symbols, input-output [P(X, Y)]:

      [2.20] [P(X, Y)] = [p(xi, yj)], i = 1, ..., n; j = 1, ..., m: an n × m matrix whose element p(xi, yj) is the joint probability of input xi and output yj

      We deduce, from this matrix of probabilities:

      [2.21] p(xi) = ∑j=1..m p(xi, yj), i = 1, 2, ..., n

      [2.22] p(yj) = ∑i=1..n p(xi, yj), j = 1, 2, ..., m
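Equations [2.21] and [2.22] say that the input and output probability vectors are the row and column sums of the joint matrix. A sketch with a hypothetical 2 × 2 joint matrix:

```python
# Joint probability matrix [P(X, Y)] for a hypothetical noisy binary channel;
# rows index the inputs x_i, columns index the outputs y_j
P = [[0.4, 0.1],
     [0.1, 0.4]]

# [2.21]: p(x_i) = sum over j of p(x_i, y_j)  -- row sums
p_x = [sum(row) for row in P]

# [2.22]: p(y_j) = sum over i of p(x_i, y_j)  -- column sums
p_y = [sum(P[i][j] for i in range(len(P))) for j in range(len(P[0]))]

print(p_x)  # marginal probabilities at the channel input
print(p_y)  # marginal probabilities at the channel output
```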

      We then define the following entropies:

       – the entropy of the source:

      [2.23] H(X) = − ∑i=1..n p(xi) log2 p(xi)

       – the entropy of variable Y at the output of the transmission channel:

      [2.24] H(Y) = − ∑j=1..m p(yj) log2 p(yj)

       – the entropy of the two joint variables (X, Y), input-output:

      [2.25] H(X, Y) = − ∑i=1..n ∑j=1..m p(xi, yj) log2 p(xi, yj)
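These three entropies can be computed from the joint matrix alone, since [2.21] and [2.22] recover the marginals. A sketch with the same hypothetical 2 × 2 channel:

```python
import math

def H(probs):
    # Entropy in bits of information; 0 * log2(0) taken as 0
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint matrix [P(X, Y)] of a binary channel
P = [[0.4, 0.1],
     [0.1, 0.4]]

p_x = [sum(row) for row in P]                        # [2.21], row sums
p_y = [sum(P[i][j] for i in range(2)) for j in range(2)]  # [2.22], column sums

H_X = H(p_x)                               # entropy of the source [2.23]
H_Y = H(p_y)                               # entropy of the output [2.24]
H_XY = H([p for row in P for p in row])    # joint entropy [2.25]

print(H_X, H_Y, H_XY)
assert H_XY <= H_X + H_Y  # joint entropy never exceeds the sum of the marginals
```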

      Because of the disturbances in the transmission