9/30/2023

Technics MASH 1 Bit DAC and 5 CD Changer

Like a lot of folks today, I have been rediscovering the virtues of CD-quality sound. Back in the 1990s, CDs were the highest-quality source of music. I have hundreds of CDs, along with several high-quality Blu-ray players, but I have to use my TV to play these. The CD player still has a place in many hi-fi systems today. Below is a Technics 5-CD changer, Model SL-PD5. It features a 1-bit DAC. The Technics MASH process converts the CD's Red Book digital signal to a 1-bit stream with 256-times oversampling, then runs that through a filter and DAC. Below are photos and a discussion from Stereophile magazine on the virtues of this amazing DAC. Later Technics models like the SL-PD5 offered optical outputs, which enable the use of an external DAC. Other brands, such as Philips and Sony, have also introduced Bitstream 1-bit DACs.













PDM, PWM, Delta-Sigma, 1-Bit DACs Peter W. Mitchell
Peter W. Mitchell wrote about MASH DACs in January 1990 (Vol.13 No.1):

In October 1989, Technics flew a dozen North American hi-fi writers, including myself, to Japan for a busy week that included seminars about MASH 1-bit digital decoding. The "1-bit" digital decoder is suddenly appearing everywhere. In recent years, competition among makers of CD players has taken the form of "bit wars," the use of ever-higher numbers of bits to decode the CD. Linear 16-bit decoders led to pseudo–18-bit decoding, then to real 18-bit decoders, and now several companies claim to be providing 20-bit decoding. If you don't read brochures carefully you may also come away with confused impressions about 24-, 32-, and even 45-bit processing (in digital filters).

The assumption, of course, is that more must be better. Re-sampling digital filters follow the same rule: if 2x re-sampling is good, 4x is better, and many of this year's best players use 8x. Decoder chips can be multiplied as well: early CD players used a single decoder, switched between channels. Now most players use two decoders, one per channel, while the newest high-performance models often use four D/A chips, a back-to-back pair in each channel.

It is possible to find engineering logic behind each of these design choices. The best reason for using 18- or 20-bit decoding, or back-to-back pairs of DACs, is that it can reduce the effect of decoder nonlinearity, providing more accurate decoding of the 16-bit data on the CD. Furthermore, the interpolations involved in "oversampling" digital filters have the effect of turning the original 16-bit data samples into 18-bit or longer digital words; using an 18- or 20-bit decoder reduces the distortion and noise that would be caused by rounding off the longer words or decoding only the topmost 16 bits.

Such improvements actually are realized in some high-priced players. But in midprice players the bit wars are just a marketing contest, a way to gain a competitive advantage by making specifications look better. In some factories the use of 18-bit or back-to-back DACs has become another excuse for avoiding the costly individual MSB fine-tuning that is required to obtain truly linear low-level decoding. The result, 18 months after this "CD cancer" became widely known, is that midprice CD players continue to vary greatly in linearity from sample to sample, and a 20-bit 4-DAC model of one brand may perform less well than another maker's 16-bit 2-DAC player. In this environment, the "bit" rating is little more than fraud.

"1-bit" processing is a fundamentally different approach from decoding the digital signal—a method that promises both finer performance in the very best CD players and more consistent performance in low-cost models. But at first it is sure to add confusion. If 18 bits is allegedly better than 16, how can a 1-bit decoder be considered hi-fi at all?

Two players with 1-bit decoding, the Technics SLP-555 and SLP-222, have been on the market since last spring, but the inclusion of the new decoder was kept a secret because the company wasn't ready to deal with this question. The brochures for those players incorrectly described them as having normal decoders in back-to-back pairs. This deception was intended not only to avoid causing confusion among consumers but also to prevent a rebellion among retail salespeople, who like to have a simple, persuasive description of each product they're trying to sell. In a "more bits is better" environment, 1-bit decoding would be a hard sell. Technics chose to postpone publicity about 1-bit decoding until the new year, and inviting hi-fi writers to a factory seminar was part of the plan.

The name, "1-bit" D/A conversion, is part of the problem because it engenders confusion without explaining anything. Philips's preferred name, "Bit-stream" decoding, is less confusing but still doesn't tell you very much. Fundamentally, the operation of a bit-stream decoder is not difficult to understand.

To appreciate why it's a better idea, let's begin at the beginning. Digital signal processing is inherently precise because it involves only simple on-off switching. Switches are either on or off; the accuracy of the result is not affected by the precision of the electrical parts involved, nor by temperature or other factors. If you have a sufficiently large number of electronic switches, operated rapidly, any desired result can be obtained. This is how computers work. And if you have too few switches for exact computation, the errors are predictable; known errors can be compensated (canceled) or can be averaged out by switching much more rapidly. (The latter is the basis of "dithering" to remove quantizing distortion in low-level signals.)

Analog processing is inherently approximate and variable, because the result depends on the physical properties of the parts used. For example, every digital device (recorder, CD player, et al) requires an output filter to reconstruct a smooth waveform and remove the ultrasonic byproducts of the digital switching process. In the early days of digital audio, those filters were complex analog circuits containing a dozen or more capacitors, inductors, and resistors. An analog filter is basically a frequency-dependent voltage divider: the signal is attenuated at each frequency according to the ratio of impedances in the circuit. Since impedances of electronic parts are specified only approximately and often vary with temperature, the response of an analog filter can be predicted only approximately. Even with selected high-precision parts it is impractical to achieve exact response, and a few years ago every digital product had a slightly different response—a built-in, nonadjustable tone control. Analog filters also exhibited a potentially audible group delay (phase shift) at high frequencies.

Then designers adopted digital filtering. A digital filter operates by combining signals after many brief time-delays (typically a few millionths of a second); in this process, unwanted signals simply cancel out. The response is controlled by the mathematical design of the filter, and by the delay times (which are precisely regulated by a crystal oscillator). Consequently manufacturers can mass-produce digital filters at very low cost, all with exactly the same response, accurate to a few thousandths of a dB. As a bonus, since the internal delays are the same for every frequency, digital filters are phase-linear.
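The delay-and-combine structure described above is, in essence, a finite-impulse-response (FIR) filter. As a rough illustration only (the coefficients below are a generic windowed-sinc design, not any CD player's actual filter), a few lines of Python show the idea; because the coefficients are symmetric fixed numbers and the delays are exact sample periods, every copy of such a filter behaves identically and delays all frequencies equally:

    import numpy as np

    def fir_lowpass(x, num_taps=31, cutoff=0.25):
        """Symmetric (hence phase-linear) FIR low-pass filter.
        Each output sample is a weighted sum of delayed input samples;
        the weights are a windowed-sinc design chosen purely for illustration."""
        n = np.arange(num_taps) - (num_taps - 1) / 2
        h = np.sinc(2 * cutoff * n) * np.hamming(num_taps)
        h /= h.sum()                      # unity gain at DC
        return np.convolve(x, h, mode="same")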

Virtually all new CD players use digital filters, not because they contain more accurate parts, but because accurate response is inherent in their design (regardless of parts quality). Initially digital filters are more costly to design, but in mass-production they are less costly to use because they are all identical; there's no need to measure each one, grade them for accuracy, or match response in pairs.

The same reasoning underlies the development of bit-stream decoders. The problem with a conventional digital/analog converter (DAC) is that its operation involves mainly analog processes and is therefore approximate. A 16-bit DAC contains a precision current source and an array of 16 switches. Each switch is connected to a resistor, and the resistors are supposed to be scaled in exact 2:1 ratios so that each switch, when turned on, will contribute exactly twice as much current to the output as the switch below it. The switches are controlled by the 16-bit codes from the CD; thus by opening and closing in various combinations, a total of 65,536 different output values can be generated.

But the topmost switch (the most-significant bit, or MSB) contributes 32,768 times as much current as the least-significant bit (LSB). If the MSB current is in error by as little as one part in 32,768, the effect of the LSB is swamped. In most CD players it is; few 16-bit DACs operate to better than 15-bit accuracy. The practical result is that most CD players are non-linear at very low signal levels, reproducing small signals at the wrong levels and with added distortion. Keep in mind that this problem arises not from the digital code itself but from small errors in an analog quantity—the current produced by the DAC for the several most-significant bits.

For comparison, imagine that you were assigned to fill a bucket with a known amount of water, using measuring cups varying in size from one ounce to 64 ounces. Even if you use care in filling the largest cup, it might contain 63.7 or 64.5 ounces instead of 64; you can't be sure that it contains exactly 64 times as much water as the smallest cup. But there is a way to obtain an exact result: use only the one-ounce cup, and transfer its contents to the bucket 64 times. The capacity of the cup may not be exactly one ounce, but as long as you fill it the same way each time, the total amount transferred will be proportional to the number of refills—an exactly linear relationship. This is the idea behind 1-bit decoding. In place of a method whose result depended on slightly uncertain analog quantities (the currents in the DAC), we have adopted a simple counting scheme—a purely digital process.
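The cup-counting argument is easy to check numerically. The toy Python model below (an illustration, not real converter code) gives a 16-bit binary-weighted DAC a tiny error in its MSB weight and looks at the step around mid-scale, which in offset-binary coding is exactly where a very quiet signal crosses zero; a converter that simply counts identical unit pulses stays perfectly proportional even when its unit is slightly off:

    # Toy model: binary-weighted DAC with a slightly wrong MSB vs. a pulse-counting DAC.
    weights = [2**k for k in range(16)]
    weights[15] *= 1.00005                 # MSB current off by ~1 part in 20,000

    def binary_weighted_dac(code):
        return sum(w for k, w in enumerate(weights) if code & (1 << k))

    # Adjacent codes at mid-scale should differ by exactly 1 unit (1 LSB):
    step = binary_weighted_dac(0x8000) - binary_weighted_dac(0x7FFF)
    print(f"binary-weighted step at mid-scale: {step:.2f} units (should be 1.00)")

    # "One cup at a time": the output is simply (code) repetitions of one unit pulse.
    # Even if the unit is slightly wrong, the output stays exactly proportional.
    unit = 1.00005
    print(f"counting step at mid-scale: {0x8000 * unit - 0x7FFF * unit:.5f} units")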

Of course with a small cup you'll have to work fast, but in modern digital electronics that's not an obstacle. In the Philips bitstream decoder, the output stage generates around ten million pulses per second, the exact rate being determined by the digital code. (This is called "pulse density modulation," or PDM.) A simple analog filter averages out the pulses to form the final analog output signal.
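The principle is easy to demonstrate with a first-order delta-sigma modulator, which is far simpler than the actual Philips or Technics multistage designs and is offered here only as a sketch of how a dense stream of identical pulses, once averaged, reproduces the signal:

    import numpy as np

    def delta_sigma_1bit(x, oversample=64):
        """First-order delta-sigma modulator: samples in [-1, 1] become a stream of
        +/-1 pulses whose short-term average follows the input (pulse-density idea).
        Real bitstream decoders use higher-order, multistage noise shaping."""
        held = np.repeat(x, oversample)          # crude oversampling: hold each sample
        bits = np.empty_like(held)
        integrator, out = 0.0, 1.0
        for i, v in enumerate(held):
            integrator += v - out                # accumulate error against the last pulse
            out = 1.0 if integrator >= 0 else -1.0
            bits[i] = out
        return bits

    # The "analog filter" step: averaging the pulses recovers the waveform.
    x = 0.5 * np.sin(2 * np.pi * np.arange(200) / 200)
    pdm = delta_sigma_1bit(x)
    recovered = np.convolve(pdm, np.ones(64) / 64, mode="same")[::64]
    print(np.max(np.abs(recovered - x)))         # small residual of shaped noise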

In all of the Japanese 1-bit decoders announced to date, the output stage is a pulse-width modulation (PWM) circuit of some type. In a PWM system the output signal is an on/off waveform in which the analog voltage is represented by the duration of the pulses, ie, the percentage of time the waveform remains in the "on" state. This is analogous to filling the bucket, not with a cup, but with a hose whose high-precision valve allows the water to flow in precisely timed bursts. When we want a larger amount of water, we use wider pulses (longer bursts).

The Technics MASH (multistage) decoder uses pulses of 11 different durations to form the output signal. The timing circuit that controls the pulses operates at a frequency of 33.9MHz, or 768 times higher than the 44.1kHz sampling rate of the digital codes in the CD. The transformation of the CD's original PCM signal into the final PWM waveform is determined mathematically and is accomplished entirely in the digital domain. In principle this can be done to any desired degree of accuracy, preserving all of the information in the original 16-bit code.
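The clock arithmetic checks out: 44.1 kHz × 768 = 33.8688 MHz, which rounds to the quoted 33.9 MHz. Below is a toy sketch of the pulse-width idea only; the frame length is an arbitrary illustrative choice, not the actual MASH output stage with its 11 pulse durations and multistage noise shaping:

    MASTER_CLOCK = 44_100 * 768            # 33,868,800 Hz, the "33.9MHz" in the text

    def pwm_frame(level, frame_clocks=32):
        """Encode a level in [0, 1] as one PWM frame: the output stays 'on' for a
        whole number of master-clock periods, so the value is carried purely by
        timing rather than by the accuracy of any analog current."""
        on_clocks = round(level * frame_clocks)
        return [1] * on_clocks + [0] * (frame_clocks - on_clocks)

    frame = pwm_frame(0.7)
    print(sum(frame) / len(frame))          # averaging the pulse recovers ~0.7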

Summing up: to obtain exact frequency and phase response, manufacturers abandoned analog filters whose performance depended on inexact circuit impedances, and adopted digital filters whose response is controlled by mathematical operations and precisely timed delays. Now, to obtain consistently exact decoding of low-level signals, they intend to abandon conventional DACs whose accuracy is affected by uncertain analog quantities (currents flowing through resistors of slightly inexact value), and replace them with bitstream decoders whose accuracy, again, is determined by mathematics and timing (the number and duration of pulses).

The essential point is that the performance of a bitstream decoder, like that of a digital filter, depends on its design and is not expected to vary from sample to sample. Unlike PCM decoders, there is no need to quality-grade the chips for accuracy, nor to fine-tune the performance on the production line. Thus the bitstream decoder brings closer the day when CD players, too, can be assembled by robots with no need for individual adjustment or testing.

Conventional current-summing DACs also require a current/voltage conversion stage, which can be a source of slewing-induced distortion, plus a deglitching circuit to suppress the "glitch" (the high-current spike) that occurs when several bits change in imperfect synchrony. A bitstream decoder needs neither.

Stereophile readers have already seen an example of how good 1-bit decoding can be, in Larry Greenhill's review of Sansui's AU-X911DG integrated amplifier (November 1989, pp.144–150). The amplifier's integral D/A converter, called "LDCS" by Sansui, is actually a third-generation Technics MASH chip. LG loved its sound, while Robert Harley measured its linearity as "exceptionally accurate, among the best I have measured...nearly a perfect straight line."

You might reasonably suppose that, while introducing a significant technological advance, manufacturers would present a united front in communicating the benefits of the new approach to consumers. No such luck. A forthright presentation of the advantages of 1-bit decoding would require admitting how variable the performance of previous and current players has been. Besides, manufacturers like to promote the alleged uniqueness of their designs: they are launching 1-bit technology with a dizzying array of jargon aimed at making each version seem unique.

Philips, the first to go public with the new system, calls its version a Bitstream decoder process and uses a pulse density modulation (PDM) output circuit. Technics, which claims to have been working on 1-bit decoding since 1986 but is only going public with it now, calls its process MASH and uses a pulse-width modulation (PWM) output circuit. Harman/Kardon is using the Technics MASH decoder in two new CD players but confused many observers by calling it a "bitstream" decoder and comparing its performance to the Philips circuit. Sansui, as noted earlier, uses the Technics MASH chip in its Vintage series CD player and integrated amplifier, but calls it "LDCS." Sony appears to be using the Philips PDM circuit in several CD players marketed overseas (but not yet in the US), calling it a "High Density Linear Converter."

All of the new 1-bit decoders contain a "noise-shaping" digital filter that suppresses hiss, enhancing the S/N ratio, hence the resolution. Technics' trade name for its decoder is a quasi-acronym for this filter: MultistAge noise SHaping (MASH). The MASH chip that has been available since last spring is a third-generation design with a claimed S/N ratio of 108dB. Sony recently announced a new decoder using Sony Extended Noise Shaping (SENS) to achieve a claimed S/N ratio of 118dB. Not to be outdone, JVC announced a chip that uses PEM (pulse-edge modulation, a sort of one-sided PWM) and VANS (Victor Advanced Noise Shaping) to achieve 120dB. At its seminar for North American hi-fi writers, Technics capped this game of corporate one-upmanship by announcing that its third-generation chip will be used only in midprice players; the company's best players will contain a new fourth-generation MASH chip rated at 123dB.
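Behind all of these acronyms is the same idea, which can be stated compactly. In a first-order sketch (much simpler than the multistage filters in MASH, SENS, or VANS), the quantization error $e[n]$ is fed back so that only its first difference reaches the output:

    $$ y[n] = x[n] + e[n] - e[n-1] \quad\Longleftrightarrow\quad Y(z) = X(z) + (1 - z^{-1})\,E(z) $$

The shaping factor $|1 - e^{-j\omega T}| = 2\,|\sin(\omega T/2)|$ is tiny at audio frequencies and large only near the very high modulator rate, where the analog output filter removes it; cascading such stages (the "multistage" in MASH) pushes still more of the quantization noise out of the audible band.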

Note that these specifications apply only to noise generated in the playback process; since virtually no CD has been recorded with a S/N ratio better than 90dB, these claims won't be realized with real recordings. (The measurement is made using a special test CD recorded with an all-zeroes code, with no dithering.)

But to demonstrate the superb linearity of the fourth-generation MASH decoder, Technics conducted a play-off comparing its newest player with current Denon and Sony models using 18- and 20-bit DACs. It was no contest; in the dithered glide tone from –60 to –120dB on the CBS test disc, the Sony produced audible distortion and the Denon generated obvious noise modulation due to nonlinearities in the DACs. (To be fair, these may have been worse-than-average samples off the production line.) The playback of this track by the Technics was the best I've ever heard, with no audible imperfection.

What appeals most to my Yankee soul is that this performance came from a decoder that is actually less costly to produce than a conventional DAC. MASH chips, or the equivalent from other manufacturers, can be used in CD players at virtually every price level. (A low-power version for portables hasn't been developed yet, but will be.) Within a couple of years, 1-bit decoders could be in every new CD player; then the cancer of nonlinear decoding will have been banished.

I don't want to leave the impression that all 1-bit decoders are alike in their performance or sound. There have been many rumors that the original Philips Bitstream decoder was not designed to leapfrog ahead of the best conventional DAC performance, but is just a way of obtaining consistent linearity in low-cost players. Further rumors suggest that Philips is working on a high-performance Bitstream decoder for introduction next year.

But the picture became confused at the British Penta hi-fi show in September, where an A/B comparison carried out by reviewer Paul Miller apparently persuaded many listeners that the present Philips Bitstream decoder sounds better than the best 18- and 20-bit conventional DACs. A friend of mine who heard the Penta demonstration examined the demonstration setup afterward; evidently the CD players were not accurately matched in level, and the comparison may have been invalid. Martin Colloms, writing in HFN/RR, added that in his own listening tests the present Philips circuit is a good mid-level performer but not equal to the best linear DACs.

Two weeks after my visit to Japan, the potential of 1-bit decoding was confirmed in a paper written by British mathematician Michael Gerzon for the New York convention of the Audio Engineering Society. In Gerzon's absence it was introduced and summarized by Stanley Lipshitz, who called it a very important paper (footnote 11). It is a mathematical analysis of the noise-shaping that is a central part of MASH and other 1-bit decoders, showing that with appropriate selection of the noise-shaping filter function, the effective dynamic range of CD playback can be increased by about 11dB, or nearly two bits' worth.
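The "nearly two bits" figure follows from the familiar rule of roughly 6 dB of dynamic range per bit:

    $$ \frac{11\ \text{dB}}{6.02\ \text{dB/bit}} \approx 1.8\ \text{bits} $$

which is consistent with the 18-bit effective resolution claimed for the 16-bit medium in the next paragraph.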

The actual limitation now lies at the recording end of the signal chain, with the nonlinearities and quantizing distortion in the A/D converters used in professional digital recorders. Gerzon's paper shows, and the Technics demonstration confirms, that if the recorded signal is correctly dithered to eliminate quantizing distortion, it is possible to record—and accurately resolve in playback—signals much smaller than the least-significant bit. (In theory this is also true with a conventional DAC, but only if it is precisely adjusted for good linearity, which real DACs usually aren't.) So while the CD is only a 16-bit storage medium, it is capable of 18-bit effective resolution and dynamic range. At the AES convention a designer of high-performance oversampling A/D converters told me that Sony will soon introduce a successor to its PCM-1630 CD mastering recorder, employing those A/D converters. Then the recent improvements in player design will really pay off.—Peter W. Mitchell


Footnote 11: "Optimal Noise Shaping and Dither of Digital Signals," Michael Gerzon and Peter G. Craven, AES Preprint 2822. Preprints are available from the Audio Engineering Society, 60 East 42nd Street, New York, NY 10165. Web: www.aes.org.

Source: https://www.stereophile.com/content/pdm-pwm-delta-sigma-1-bit-dacs-peter-w-mitchell

Technics SL-MC4 60 CD Changer (from Amazon.com)
60+1 CD changer, digital optical output, CD text search and scrolling text display. Text edit function, phone-style 10-key entry pad, quick disc-change mechanism. Front-loading mechanism allows you to play one disc while changing another. Quick single-play system, 14 preset grouping files.

Large-capacity CD changers are among the best bargains in today's audio market, and Technics is one of a handful of companies responsible for bringing them to a broad consumer base. The SL-MC4 61-disc changer/player is a well-crafted component that fits neatly into an entertainment rack while offering just enough storage capacity to keep most music lovers content.

This handsome player defies the "jukebox" description of many changers, measuring as it does less than seven inches high (with a standard width). The entire front panel lifts down manually to reveal all 61 slots, with slot 1 reserved for single-disc play only. We were impressed with the build quality of the door mechanism, which slides down gently but firmly and doesn't appear prone to breakage. This mega-changer includes an optical digital output for connecting to an outboard digital-to-analog converter or a surround receiver or processor with digital inputs.

We connected the SL-MC4 to an outboard digital-to-analog converter with a Toslink optical cable, plugged it in, slipped a CD in the single-disc slot, hit play, and whistled the tune of simplicity.

Since programming features can be rather complicated with today's computer-reliant changers, operating instructions are a must-read. Technics deserves credit for providing well-written, concise instructions on the multitude of programming options, including how to categorize discs by music genre (choose from 14, from Ballads to Oldies) and how to input customized text to identify discs (though a growing number of discs offer CD Text, which displays track and artist information automatically).

It took approximately 90 minutes to read the instructions and become comfortable with entering text using both the remote control and the front-panel numeric keypad, which includes letters just like a phone. It took a few trial runs to get the procedure down, a process encumbered by the changer's 7-second time limit for text entry. Once we got the hang of it, however, we had the procedure memorized after about half a dozen discs.

Obviously, programming 60 CDs is cumbersome and requires an afternoon of leisure time, but it's well worth the effort, since it eliminates the task of searching for the right CD in a five-foot display rack or, worse, shuffling through the changer in search of a specific title. Once this mega-changer is armed and loaded, it brings added pleasure to general music listening, not to mention parties.

The SL-MC4 should top any host's list of must-have electronics, since it can play a weekend's worth of music at the touch of a button. Although sound quality doesn't seem to be a priority in mega-CD changers, the SL-MC4 is more than adequate for most music lovers, particularly when taking advantage of the fiber-optic audio output. Kudos to Technics for simplifying today's large-capacity CD changers with the SL-MC4.












9/29/2023

The Loudness War - wiki

The loudness war (or loudness race) is a trend of increasing audio levels in recorded music, which reduces audio fidelity and—according to many critics—listener enjoyment. Increasing loudness was first reported as early as the 1940s, with respect to mastering practices for 7-inch singles.[1] The maximum peak level of analog recordings such as these is limited by varying specifications of electronic equipment along the chain from source to listener, including vinyl and Compact Cassette players. The issue garnered renewed attention starting in the 1990s with the introduction of digital signal processing capable of producing further loudness increases.

With the advent of the compact disc (CD), music is encoded to a digital format with a clearly defined maximum peak amplitude. Once the maximum amplitude of a CD is reached, loudness can be increased still further through signal processing techniques such as dynamic range compression and equalization. Engineers can apply an increasingly high ratio of compression to a recording until it more frequently peaks at the maximum amplitude. In extreme cases, efforts to increase loudness can result in clipping and other audible distortion.[2] Modern recordings that use extreme dynamic range compression and other measures to increase loudness therefore can sacrifice sound quality to loudness. The competitive escalation of loudness has led music fans and members of the musical press to refer to the affected albums as "victims of the loudness war".
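To make the mechanism concrete, here is a toy Python sketch (illustrative only; the threshold, ratio, and make-up gain are arbitrary values, not any engineer's actual settings) showing how heavy compression plus make-up gain raises the RMS level of a track until samples start hitting digital full scale and clip:

    import numpy as np

    def rms_dbfs(x):
        """RMS level in dB relative to full scale (1.0 = 0 dBFS)."""
        return 20 * np.log10(np.sqrt(np.mean(x**2)))

    fs = 44100
    t = np.arange(fs) / fs
    # Test signal: a 1 kHz tone with a fluctuating envelope (quiet and loud passages).
    signal = 0.5 * np.sin(2 * np.pi * 1000 * t) * (0.3 + 0.7 * np.abs(np.sin(2 * np.pi * 2 * t)))

    # Crude static compression above a threshold, then make-up gain and a hard clip
    # at digital full scale (all values chosen only for illustration).
    threshold, ratio, makeup = 0.25, 4.0, 3.5
    over = np.abs(signal) > threshold
    compressed = np.where(over,
                          np.sign(signal) * (threshold + (np.abs(signal) - threshold) / ratio),
                          signal)
    loudened = np.clip(compressed * makeup, -1.0, 1.0)

    print(f"original RMS: {rms_dbfs(signal):6.1f} dBFS")
    print(f"loudened RMS: {rms_dbfs(loudened):6.1f} dBFS")
    print(f"samples clipped: {np.mean(np.abs(compressed * makeup) > 1.0):.1%}")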
History

The practice of focusing on loudness in audio mastering can be traced back to the introduction of the compact disc,[3] but also existed to some extent when the vinyl phonograph record was the primary released recording medium and when 7-inch singles were played on jukebox machines in clubs and bars. The so-called wall of sound (not to be confused with the Phil Spector Wall of Sound) formula preceded the loudness war, but achieved its goal using a variety of techniques, such as instrument doubling and reverberation, as well as compression.[4]

Jukeboxes became popular in the 1940s and were often set to a predetermined level by the owner, so any record that was mastered louder than the others would stand out. Similarly, starting in the 1950s, producers would request louder 7-inch singles so that songs would stand out when auditioned by program directors for radio stations.[1] In particular, many Motown records pushed the limits of how loud records could be made; according to one of their engineers, they were "notorious for cutting some of the hottest 45s in the industry."[5] In the 1960s and 1970s, compilation albums of hits by multiple different artists became popular, and if artists and producers found their song was quieter than others on the compilation, they would insist that their song be remastered to be competitive.

Because of the limitations of the vinyl format, the ability to manipulate loudness was also limited. Attempts to achieve extreme loudness could render the medium unplayable. Digital media such as CDs remove these restrictions, and as a result increasing loudness levels have been a more severe issue in the CD era.[6] Modern computer-based digital audio effects processing allows mastering engineers to have greater direct control over the loudness of a song: for example, a brick-wall limiter can look ahead at an upcoming signal to limit its level.[7]

[Figure: Three different releases of ZZ Top's song "Sharp Dressed Man" show increasing loudness over time: 1983–2000–2008.[8]]
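The look-ahead behaviour mentioned above can be sketched in a few lines of Python. This is a deliberately crude illustration (fixed window, no gain smoothing), not the algorithm of the Waves L1 or any other commercial limiter:

    import numpy as np

    def lookahead_limiter(x, ceiling=0.98, lookahead=64):
        """Toy look-ahead brick-wall limiter: scan the next `lookahead` samples and
        reduce the gain in advance so that no peak ever exceeds `ceiling`."""
        y = np.empty_like(x)
        for i in range(len(x)):
            peak = max(np.max(np.abs(x[i:i + lookahead])), 1e-12)
            y[i] = x[i] * min(1.0, ceiling / peak)   # gain only ever attenuates
        return y

    # A burst that would clip is held just below full scale instead.
    t = np.arange(4410) / 44100
    burst = 1.4 * np.sin(2 * np.pi * 440 * t)        # peaks above 0 dBFS
    limited = lookahead_limiter(burst)
    print(np.max(np.abs(burst)), np.max(np.abs(limited)))   # ~1.4 vs ~0.98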

The stages of CD loudness increase are often split over the decades of the medium's existence.
1980s

Since CDs were not the primary medium for popular music until the late 1980s, there was little motivation for competitive loudness practices then. The common practice of mastering music for CD involved matching the highest peak of a recording at, or close to, digital full scale, and referring to digital levels along the lines of more familiar analog VU meters. When using VU meters, a certain point (usually −14 dB below the disc's maximum amplitude) was used in the same way as the saturation point (signified as 0 dB) of analog recording, with several dB of the CD's recording level reserved for amplitude exceeding the saturation point (often referred to as the "red zone", signified by a red bar in the meter display), because digital media cannot exceed 0 decibels relative to full scale (dBFS).[citation needed] The average RMS level of the average rock song during most of the decade was around −16.8 dBFS.[9]: 246 
1990s

By the early 1990s, mastering engineers had learned how to optimize for the CD medium and the loudness war had not yet begun in earnest.[10] However, in the early 1990s, CDs with louder music levels began to surface, and CD levels became more and more likely to bump up to the digital limit,[note 1] resulting in recordings where the peaks on an average rock or beat-heavy pop CD hovered near 0 dBFS,[note 2] but only occasionally reached it.[citation needed]

The concept of making music releases "hotter" began to appeal to people within the industry, in part because of how noticeably louder some releases had become and also in part because the industry believed that customers preferred louder-sounding CDs, even though that may not have been true.[11] Engineers, musicians, and labels each developed their own ideas of how CDs could be made louder.[12] In 1994, the first digital brick-wall limiter with look-ahead (the Waves L1) was mass-produced; this feature, since then, has been commonly incorporated in digital mastering limiters and maximizers.[note 3] While the increase in CD loudness was gradual throughout the 1990s, some opted to push the format to the limit, such as on Oasis's widely popular album (What's the Story) Morning Glory?, whose RMS level averaged −8 dBFS on many of its tracks—a rare occurrence, especially in the year it was released (1995).[10] Red Hot Chili Peppers's Californication (1999) represented another milestone, with prominent clipping occurring throughout the album.[12]
2000s

[Figure: Waveform envelope comparison showing how the CD release of Death Magnetic (top) employed heavy compression, resulting in higher average levels than the Guitar Hero downloadable version (bottom).]

By the early 2000s, the loudness war had become fairly widespread, especially with some remastered re-releases and greatest hits collections of older music. In 2008, loud mastering practices received mainstream media attention with the release of Metallica's Death Magnetic album. The CD version of the album has a high average loudness that pushes peaks beyond the point of digital clipping, causing distortion. This was reported by customers and music industry professionals, and covered in multiple international publications, including Rolling Stone,[13] The Wall Street Journal,[14] BBC Radio,[15] Wired,[16] and The Guardian.[17] Ted Jensen, a mastering engineer involved in the Death Magnetic recordings, criticized the approach employed during the production process.[18] When a version of the album without dynamic range compression was included in the downloadable content for the video game Guitar Hero III, copies of this version were actively sought out by those who had already purchased the official CD release. The Guitar Hero versions of the album's songs exhibit much higher dynamic range and less clipping than those on the CD release, as can be seen in the illustration.[19]

In late 2008, mastering engineer Bob Ludwig offered three versions of the Guns N' Roses album Chinese Democracy for approval to co-producers Axl Rose and Caram Costanzo. They selected the one with the least compression. Ludwig wrote, "I was floored when I heard they decided to go with my full dynamics version and the loudness-for-loudness-sake versions be damned." Ludwig said the "fan and press backlash against the recent heavily compressed recordings finally set the context for someone to take a stand and return to putting music and dynamics above sheer level."[20]
2010s

In March 2010, mastering engineer Ian Shepherd organised the first Dynamic Range Day,[21] a day of online activity intended to raise awareness of the issue and promote the idea that "Dynamic music sounds better". The day was a success and its follow-ups in the following years have built on this, gaining industry support from companies like SSL, Bowers & Wilkins, TC Electronic and Shure as well as engineers like Bob Ludwig, Guy Massey and Steve Lillywhite.[22] Shepherd cites research showing there is no connection between sales and loudness, and that people prefer more dynamic music.[4][23] He also argues that file-based loudness normalization will eventually render the war irrelevant.[24]

One of the biggest albums of 2013 was Daft Punk's Random Access Memories, with many reviews commenting on the album's great sound.[25][26] Mixing engineer Mick Guzauski deliberately chose to use less compression on the project, commenting "We never tried to make it loud and I think it sounds better for it."[27] In January 2014, the album won five Grammy Awards, including Best Engineered Album (Non-Classical).[28]

Analysis in the early 2010s suggests that the loudness trend may have peaked around 2005 and subsequently reduced, with a pronounced increase in dynamic range (both overall and minimum) for albums since 2005.[29]

Mastering engineer Bob Katz has argued that "The last battle of the loudness war has been won", claiming that mandatory use of Sound Check by Apple would lead producers and mastering engineers to turn down the level of their songs to the standard level, or Apple would do it for them. He believed this would eventually result in producers and engineers making more dynamic masters to take account of this factor.[30][31][32]

Earache Records reissued much of its catalog as part of its "Full Dynamic Range" series, intended to counteract the loudness war and ensure that fans hear the music as it was intended.[33]
2020s

By the late 2010s/early 2020s, most major U.S. streaming services began normalizing audio by default.[34] Target loudness for normalization varies by platform:
Audio normalization per streaming service (loudness measured in LUFS):
Amazon Music: −13 LUFS[35]
Apple Music: −16 LUFS[35]
SoundCloud: −14 LUFS[35]
Spotify: −14 LUFS (−11 and −19 available in premium)[36][37]
Tidal: −14 LUFS (default) or −18 LUFS[38][35]
YouTube: −14 LUFS[39]


Measured LUFS may further vary among streaming services due to differing measurement systems and adjustment algorithms. For example, Amazon, Tidal, and YouTube do not increase the volume of tracks.[35]
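As a rough sketch of what such normalization amounts to (an illustration, not any service's published algorithm), the playback gain is simply the difference between the platform's target and the track's measured loudness, with some platforms declining to apply positive gain:

    def normalization_gain_db(measured_lufs, target_lufs=-14.0, allow_boost=False):
        """Gain in dB a streaming service would apply to reach its loudness target.
        Services such as Amazon, Tidal, and YouTube reportedly only turn tracks down,
        so positive gain is disabled by default here (an illustrative simplification)."""
        gain = target_lufs - measured_lufs
        return gain if allow_boost else min(gain, 0.0)

    print(normalization_gain_db(-8.0))                      # hyper-compressed master: -6 dB
    print(normalization_gain_db(-18.0))                     # dynamic master: 0 dB (no boost)
    print(normalization_gain_db(-18.0, allow_boost=True))   # +4 dB where boosting is allowed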

Some services do not normalize audio, for example Bandcamp.[35]
Radio broadcasting

When music is broadcast over radio, the station applies its own signal processing, further reducing the dynamic range of the material to closely match levels of absolute amplitude, regardless of the original recording's loudness.[40]

Competition for listeners between radio stations has contributed to a loudness war in radio broadcasting.[41] Loudness jumps between television broadcast channels, between programs within the same channel, and between programs and intervening adverts are a frequent source of audience complaints.[42] The European Broadcasting Union has addressed this issue in the EBU PLOUD Group with publication of the EBU R 128 recommendation. In the U.S., legislators passed the CALM Act, which led to enforcement of the formerly voluntary ATSC A/85 standard for loudness management.
Criticism

In 2007, Suhas Sreedhar published an article about the loudness war in the engineering magazine IEEE Spectrum. Sreedhar said that the greater possible dynamic range of CDs was being set aside in favor of maximizing loudness using digital technology. Sreedhar said that the over-compressed modern music was fatiguing, that it did not allow the music to "breathe".[43]

The production practices associated with the loudness war have been condemned by recording industry professionals including Alan Parsons and Geoff Emerick,[44] along with mastering engineers Doug Sax, Stephen Marcussen, and Bob Katz.[5] Musician Bob Dylan has also condemned the practice, saying, "You listen to these modern records, they're atrocious, they have sound all over them. There's no definition of nothing, no vocal, no nothing, just like—static."[45][46] Music critics have complained about excessive compression. The Rick Rubin–produced albums Californication and Death Magnetic have been criticised for loudness by The Guardian; the latter was also criticised by Audioholics.[47][48] Stylus Magazine said the former suffered from so much digital clipping that "even non-audiophile consumers complained about it".[10]

Opponents have called for immediate changes in the music industry regarding the level of loudness.[46] In August 2006, the vice-president of A&R for One Haven Music, a Sony Music company, in an open letter decrying the loudness war, claimed that mastering engineers are being forced against their will or are preemptively making releases louder to get the attention of industry heads.[6] Some bands are being petitioned by the public to re-release their music with less distortion.[44]

The nonprofit organization Turn Me Up! was created by Charles Dye, John Ralston, and Allen Wagner in 2007 with the aim of certifying albums that contain a suitable level of dynamic range[49] and encouraging the sale of quieter records by placing a "Turn Me Up!" sticker on certified albums.[50] As of 2019, the group had not produced an objective method for determining what will be certified.[51]

A hearing researcher at House Ear Institute is concerned that the loudness of new albums could possibly harm listeners' hearing, particularly that of children.[50] The Journal of General Internal Medicine has published a paper suggesting increasing loudness may be a risk factor in hearing loss.[52][53]


A two-minute YouTube video addressing this issue by audio engineer Matt Mayfield[54] has been referenced by The Wall Street Journal[55] and the Chicago Tribune.[56] Pro Sound Web quoted Mayfield, "When there is no quiet, there can be no loud."[57]

The book Perfecting Sound Forever: An Aural History of Recorded Music, by Greg Milner, presents the loudness war in radio and music production as a central theme.[12] The book Mastering Audio: The Art and the Science, by Bob Katz, includes chapters about the origins of the loudness war and another suggesting methods of combating the war.[9]: 241  These chapters are based on Katz's presentation at the 107th Audio Engineering Society Convention (1999) and subsequent Audio Engineering Society Journal publication (2000).[58]
Debate


In September 2011, Emmanuel Deruty wrote in Sound on Sound, a recording industry magazine, that the loudness war has not led to a decrease in dynamic variability in modern music, possibly because the original digitally recorded source material of modern recordings is more dynamic than analogue material. Deruty and Damien Tardieu analyzed the loudness range (LRA) over a 45-year span of recordings and observed that the crest factor of recorded music diminished significantly between 1985 and 2010, but the LRA remained relatively constant.[29] Deruty and Tardieu also criticized Sreedhar's methods in an AES paper, saying that Sreedhar had confused crest factor (peak to RMS) with dynamics in the musical sense (pianissimo to fortissimo).[59]

This analysis was also challenged by Ian Shepherd and Bob Katz on the basis that the LRA was designed for assessing loudness variation within a track while the EBU R128 peak to loudness ratio (PLR) is a measure of the peak level of a track relative to a reference loudness level and is a more helpful metric than LRA in assessing overall perceived dynamic range. PLR measurements show a trend of reduced dynamic range throughout the 1990s.[60][61]
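The distinction at issue can be shown with a short sketch. Crest factor compares a track's peaks with its own RMS level, whereas a PLR-style figure compares the peaks with an EBU R 128 integrated loudness measurement (which involves K-weighting and gating, omitted here for brevity); the toy Python example below computes only the crest factor:

    import numpy as np

    def crest_factor_db(x):
        """Crest factor: peak level relative to RMS level, in dB."""
        peak = np.max(np.abs(x))
        rms = np.sqrt(np.mean(np.square(x)))
        return 20 * np.log10(peak / rms)

    # A heavily limited track has a low crest factor; a dynamic one measures higher.
    t = np.linspace(0, 1, 44100, endpoint=False)
    dynamic = np.sin(2 * np.pi * 440 * t) * np.linspace(0.1, 1.0, t.size)
    squashed = np.clip(3.0 * dynamic, -1.0, 1.0)
    print(f"dynamic:  {crest_factor_db(dynamic):.1f} dB")
    print(f"squashed: {crest_factor_db(squashed):.1f} dB")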

Debate continues regarding which measurement methods are most appropriate to evaluating the loudness war.[62][63][64]


Source: https://en.wikipedia.org/wiki/Loudness_war
Examples of "loud" albums

Albums that have been criticized for their sound quality include:
Artist: Album (release date)
Arctic Monkeys: Whatever People Say I Am, That's What I'm Not[10] (23 January 2006)
Black Sabbath: 13[65] (10 June 2013)
Christina Aguilera: Back to Basics[6] (9 August 2006)
The Cure: 4:13 Dream[66] (27 October 2008)
Depeche Mode: Playing the Angel[67][68][note 4] (14 October 2005)
Duran Duran: Duran Duran (2010 remaster)[69] (29 March 2010)
Duran Duran: Seven and the Ragged Tiger (2010 remaster)[69]
The Flaming Lips: At War with the Mystics[10][note 5] (3 April 2006)
Led Zeppelin: Mothership[70][note 6] (12 November 2007)
Lily Allen: Alright, Still[70] (13 July 2006)
Los Lonely Boys: Sacred[6] (18 July 2006)
Metallica: Death Magnetic[71][72][note 7][48] (12 September 2008)
Miranda Lambert: Revolution[73] (29 September 2009)
Oasis: (What's the Story) Morning Glory?[10] (2 October 1995)
Paul McCartney: Memory Almost Full[74] (4 June 2007)
Paul Simon: Surprise[75] (9 May 2006)
Queens of the Stone Age: Songs for the Deaf[10] (27 August 2002)
Red Hot Chili Peppers: Californication[48][10] (8 June 1999)
Rush: Vapor Trails[75][note 8] (14 May 2002)
The Stooges: Raw Power (1997 remix & remaster)[75] (22 April 1997)
Taylor Swift: 1989[76] (27 October 2014)