Isn't it the opposite of the headline? Modems were hamstrung by digital phone lines you didn't know you had. One final trick allowed them to match, but not exceed, those digital lines.
To me it looks like the article is largely told from the perspective of ISPs connecting to each other, and how the constant analog/digital conversions between carriers were causing problems sustaining the 56k data rate that the (analog) last mile was always capable of... and how converting their internal systems and backhauls to digital solved that issue.
Exactly. More specifically, Internet servers in data centers connected to the IP network, not the phone network (not surprisingly), so they almost always had an uplink significantly faster than 56 Kb/s. And the analog local loop had encodings that could transmit at 56 Kb/s. But these encodings were not compatible with the analog-to-digital conversion used in the phone network backbone.
While today you'd probably expect that the IP network could connect to every phone branch office where the local loops were connected to the phone network, that wasn't necessarily the case - Internet data would usually have to travel some of its way over the phone network backbone, with the problematic digital encodings. V.90 and related standards allowed the phone network to accept the digital data directly from ISPs and send it in digital form to the branch office, without attempting a digital to analog to digital conversion to inject it as digital voice into the phone network. That's why the upload speed couldn't be improved via this method - it would still need to undergo analog-digital conversion to travel across the phone network to the ISP where it could enter the IP network. (V.92, a later standard, improved upload speed to 48 Kb/s via fancier signal processing trickery that could survive the digital voice conversion.)
All the early metropolitan or long-haul fiber (mostly SONET) networks were digital aggregations of various circuit modes (DS*) in those days. It made sense, since the phone network was pretty much the only long-haul network around and even the pure-IP networks didn't yet have enough of a market for alternative protocols. I've been out of the core-networking loop for a while, but my understanding is that most modern long-haul networks are Ethernet over OTN.
The phone companies had enormous sway over the development of these longer-haul protocols. The debate around packet sizing almost always favoured smaller cells (especially ATM), which were better suited to voice, at the cost of added overhead for more standard IP packets. They were also often very connection-oriented, with all the extra equipment overhead that required.
Circuits meant for IP didn't convert digital to analog and back unless they had to share the line with a phone at one end. If you had digital connections to both ends of a digital circuit, you'd use all the bits for data.
The analog lines before digitization were limited to about 4 kHz of bandwidth per voice channel. This is because, to deliver a circuit between two points, the system generally figured out which bit of bandwidth wasn't in use and frequency-shifted the entire connection into that band.
Yeah, kind of. The u-law sampling of audio to produce the 64 kbit/s audio channels in ISDN isn't good for encoding a signal created by a modem.
But if it was a direct copper pair end-to-end, then attenuation and other electrical characteristics would have made it hard to achieve the higher speeds; this is the Shannon limit they mention.
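For a rough sense of that Shannon limit: capacity is C = B * log2(1 + S/N). A minimal sketch, plugging in illustrative analog-loop numbers (roughly 3.1 kHz of usable bandwidth and ~35 dB SNR are assumptions here, not measurements), lands right in dialup territory:

    # Shannon capacity for an illustrative analog local loop.
    # Bandwidth and SNR figures are assumed, for illustration only.
    import math

    def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    print(shannon_capacity_bps(3100, 35))  # ~36,000 bit/s, i.e. ~33.6k territory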
I have to wonder if the (theoretical) cap on the copper wires was more because of the technology standards in play at the time. Surely the copper wires could have handled more if they no longer had to carry voice communication (with the old tech specs of the time)?
Ok. Searched around.
Here is an article that states old copper could have carried 1 gigabit.
The practical implementation of this (at the time) was ADSL, HDSL, IDSL, and SDSL. Those technologies all took one or two copper pairs and terminated them on a DSLAM (or a similar device in the case of HDSL) instead of on a telephone switch. For physical pairs connected to an analog voice port on a telephone switch, between the band-pass filtering and DS0 coding, you were never going to get more than 56 kbps. The xDSLs could get from 144 kbps to several Mbps in practice, depending on the variant and line conditions.
Keep in mind that at the time, LAN speeds over controlled twisted copper pairs over short distances (100m) were 100mbps - 1gbps.
If you've ever seen the physical condition of the telephone company's outside subscriber wiring (what they call "outside plant") -- and particularly the intermediate splices between central office and subscriber -- you would quickly disabuse yourself of the notion that you could transmit anything close to 1gbps over a twisted pair.
Those copper wires ran from your house to the local central office, the "last mile" of the connection. (which was sometimes 2-3 miles long)
A quick read of the linked article seems to indicate that it's BS, as it doesn't account for the real topology of the local loop. In particular, in older neighborhoods you had a bundle of pairs going down the street, and a new connection was made by patching in to a free pair, creating a "T" shaped circuit. When a house was disconnected, part of this "stub" might have been left attached; over time a single pair might accumulate multiple disconnected stubs. The capacity of that copper circuit is far lower than a straight run.
In addition in many cases corrosion and water cause noise, further reducing bandwidth - I can remember having noise so bad on rainy days that I had to call and get them to fix it. (I assume they patched us onto a free pair and abandoned the noisy one)
Of course none of this is related to the end-to-end bandwidth of the old telephone system. Starting in the 50s, a longer-distance phone call would get a single-sideband channel on a microwave link, with about 3 kHz allocated. Later on, calls got sampled at 8 kHz with 8-bit mu-law (logarithmic) encoding, or A-law in Europe, and transmitted digitally.
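A minimal sketch of that mu-law companding curve (the continuous form; real G.711 quantizes to 8-bit codewords with a bias term), plus the arithmetic that makes a digital voice channel 64 kbit/s:

    import math

    MU = 255.0  # mu-law parameter used in North America / Japan

    def mulaw_compress(x: float) -> float:
        """Logarithmically compress a sample in [-1, 1]."""
        return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

    def mulaw_expand(y: float) -> float:
        """Invert the companding curve."""
        return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

    # 8000 samples/s * 8 bits/sample = 64,000 bit/s per voice channel
    print(8000 * 8)

    # Quiet samples keep far more resolution than loud ones:
    for x in (0.01, 0.1, 1.0):
        print(x, round(mulaw_compress(x), 3))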
Of course ancient telephone wiring can carry 1 Gbps. The real question you always, always, always, need to be asking yourself is:
Over what distance?
Make that distance short enough, as has happened with FTTN or FTTC deployments in a whole heap of places, and you're basically building a network that's, and I'll keep this very brief, subpar.
Since you mentioned a UK context there: Openreach rolled out an upgrade that kept the last mile of copper, but now, just about a decade later, they're rolling out full fibre. Whatever argument copper had, it went out the window near enough a decade ago.
OK, had a look through the linked paper. The big graphs, on page 8, tell you that if you decrease the twist length - which would entail re-laying all the copper in the entire network, at which point you may as well put down fibre instead - you will get -20 dB at 10 GHz over a distance of 0.5 m (50 cm, less than two feet), instead of -25 dB in the worst case.
In other words, instead of losing 99.7% of the signal over that distance, it'll only lose 99% of the signal. Sure, it helps, but consider me underwhelmed.
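(The dB-to-fraction conversion behind those percentages, for anyone checking: fraction of power remaining = 10^(-dB/10).)

    def power_remaining(db_loss: float) -> float:
        return 10 ** (-db_loss / 10)

    print(power_remaining(20))  # 0.010  -> ~99% of the power lost
    print(power_remaining(25))  # 0.0032 -> ~99.7% of the power lost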
Distance is the problem. The gauge is small so we can't throw a very strong signal down the wire. So you have repeaters on almost every span of any appreciable length.
The very first part of a dialup modem sound? Where it's playing a tone that reverses phase at regular intervals? That tone is actually designed to disable the echo suppressors and echo cancelers that are in your switched circuit.
Also two parallel phone lines are prone to capacitive coupling. I had a case so bad once that one office could pick up the phone and nearly perfectly couple onto their neighbors line and hear all their conversations. It was a 50/50 which port on the PBX recognized the tones and started the call when either of them picked up to dial out.
It can carry more; the whole value proposition of DSL was a high-speed link over existing cabling. I think the dialup limitation is what speed you can sneak over the existing speech-focused analog signal processing equipment, whereas, as the article explained, making that analog link as short as possible could improve speeds quite a bit. DSL was what you could achieve over the same lines when you were not forced to constrain your signal to speech frequencies.
I remember the day we got 56K from our ISP. Our modem was 56K (no idea what model, some internal WinModem), but the ISP was limited to 33.6K. Then one day, the connection screech sounded different, and I excitedly pointed this out. Several seconds later, it connected, and lo and behold, we had 66% faster downloads.
The fact that all of this worked continues to amaze me, but then, so do mobile phones. I understand at a high level how CDMA works, but it’s just so insane…
I remember some ISPs allowing you to "shotgun" two 56k modems for double the speed!
Like some other commenters I also fondly remember ISDN. Overall I found it to be finicky. Sometimes one channel would just drop, even if a phone call wasn't coming in. And, in order to use a traditional analog phone with your ISDN line, you needed a special powered "TA" (terminal adapter) or the phone wouldn't ring when a call came in.
Now I'm curious about how this worked in my city where we most definitely didn't have anything digital to our phone system at the time. As in, you had to use pulse dialing, and sometimes, rarely, your calls would glitch such that you would hear someone else talking over your call. Yet I remember consistently getting 40-something kbit over that.
Around 2000, I saw a crew pulling new phone lines through the neighborhood because everybody was getting a second line and they were running out, but even after switching to the new copper, we were still stuck at 26400. 20+ years later, it looks like 25/5 ADSL is now available at that address, so the new copper wasn't a complete waste.
More than one A-to-D conversion would knock it down to no more than 33.6. Pair gain units would cause problems. Bridge taps and load coils could also be problematic, but that was much more of a concern on DSL. Older amplifiers had very narrow filters and would also cause slow speeds. Echo cancellation hardware would ignore the in-band signal to get out of the way and cause problems.
As there was no legal compulsion to get them to act, BellSouth wouldn't do anything to help slow connect speeds for internet dialup. The trick was to lie and say you were having problems sending a fax; then they were required to act. They wouldn't even worry about testing first, as it was quicker to just re-engineer the line to the best practices of the day.
I worked tech support at an ISP. Occasionally customers would call in from their brand new subdivision to complain about our crummy service. “This house is brand new and we only get 26.4. What’s wrong with your system?”
And then I had the satisfaction of explaining that Ma Bell cheaped out and used pair gains to build out their block cheaply. Sorry, call them to complain, or move.
Yes, these modems were almost-ISDN (minus the razor-fast call setup), and required a fully digital backend to work. They could only do 56k in one direction, toward the user, too. But they were made for internet access so that didn't really matter.
ISDN had much, much better latency than even 56k modems. Modems were around 150ms minimum. ISDN was often in the sub 20 ms range. This made a big difference for chatty protocols like HTTP, telnet, etc.
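Back-of-envelope, using those round-trip figures and assuming, say, 10 sequential request/response exchanges for a page (a made-up but plausible count for a chatty protocol):

    # Pure round-trip time for 10 sequential exchanges (count assumed).
    for link, rtt_ms in [("56k modem", 150), ("ISDN", 20)]:
        print(f"{link}: {10 * rtt_ms / 1000:.1f} s spent just on round trips")
    # 56k modem: 1.5 s; ISDN: 0.2 s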
True, though in those days that didn't really matter except for gaming. The bandwidth was so low and the web was growing in content so fast that the local uplink bandwidth was usually congested, causing delays due to queueing and pending acknowledgements. Switches and routers weren't quite as optimised for low latency either. A lot of local networks were still 10base2 (or 10baseT with hubs, not switches), so collisions added further latency.
What I loved the most about ISDN was the quick call setup. Took like 1 second max and you had a 64kbit channel. 56k modems went through a dialling phase, a connection phase, endless handshaking...
Or gaming. I never got good at Quake because of it. IIRC they added hardware compression to the later versions (I think starting at 33.6?) that added latency, too.
I remember my brothers friend in rural Portugal having one way satellite Internet back in the 90s to very early 00s - you used a standard dial up for the upstream, but with a satellite dish got much much faster downloads. Blew my mind that you could go out one way and receive another and still get a functioning (and fast) connection.
Hey, in the early 00s I briefly worked for an ISP trying to do rural Portugal/Spain satellite internet (to exploit rather large subsidies offered at the time)! AFAIR it was 1-2 Mbit downstream for all subscribers in one area from a geostationary sat, with 500-1000 ms pings. Almost unusable for normal browsing. The company was from a defense background, and from my limited understanding at the time they were piggybacking on some military leftover tech. The idea was to use commercial WiFi to link local customers to a central location, with some magic proxy server trying to hide all that latency.
The first cable Internet service I had was telephone return. Downstream was over the cable modem at ~512Kbps (IIRC) but upstream was over a dial-up modem.
It was cool having a fast downstream but the slow upstream over finicky dial-up was a pain in the ass. If the dial-up dropped the in progress downloads all died because no ACKs could be sent. Gaming was no better than plain dial-up since your upstream had the same shitty latency as plain dial-up.
Yup. I worked on the "Rapport" series of switches at Bell Canada. It was DS1 (Digital Signal 1) out one end and a rack full of Zyxel modems on the other side. The idea was RBOCs (Regional Bell Operating Companies) would put these in their CO (Central Office) and terminate 56k modem signals over the analog "last mile" loop to the customer premises and then do Frame Relay over the phone company's data lines to your ISP.
I know Southwestern Bell bought a number of them and stuffed them in a closet north of downtown Dallas. During the install I remember having to explain what Ethernet was to their techs. They were EXCELLENT at phone standards, but had decided the data world was threatening and were determined to never learn anything about it.
I know that between around '93 and '97 if you dialed AOL from D/FW there was a good chance your call would be terminated somewhere within a mile or two of your house and the bits flowing between your Compaq Presario and AOL would be sent digitally from the local CO to AOL's data center in Sterling, VA.
This line of business was (of course) destroyed by consumer DSL and cable modems, but for about 5 years it was fairly popular with the phone companies. ISDN at the time was a bit pricey for most households, and a modem is a one-off purchase. Most people I knew using things like AOL or CompuServe were using a hand-me-down 33.6k modem on a crappy 33MHz 486SX running DOS / Win3.1 / Win95 and were fairly cost-sensitive.
Revealing my ignorance here, but was (is?) there a telephone equivalent of anycast, such that, say, the 1-800-... or 1-900-... numbers would be routed differently based on location? My basic knowledge of phone systems suggests it would at least be possible.
Yes, the Intelligent Network was the big thing in the 1990s. It allowed for routing as you describe as well as calling cards and many other features: https://en.m.wikipedia.org/wiki/Intelligent_Network
Oh the joys of watching terminal logs with SS7 data frames!
Was there any way for the customer to somehow interact with SS7 frames?
We’ve had techs come to our home in Canada in the 1990s, and I remember being fascinated with their mystical toolbox phone that seemed to uncover hidden phone line functionality. Almost like the whip in Indiana Jones.
I don't think so, but who knows. In my case I was working at a telco so everything was very obvious at the dev environments.
Yes, but it would usually be based on the first 3 or 6 digits of your phone number. The first 3 are of course the “area code” and the first 6 are called the “NPA-NXX”. This has blurred some due to local number portability and cell phones.
Absolutely; when a call hits the local switch it can be terminated differently based on its location. Particularly pertinent for modem-based services in the 90s: a single national dialup number would terminate on 100s of local POPs, and routing decisions would be done to keep traffic as local as possible.
Internet Thruway from Nortel allowed multiple ISPs to use the same local termination hardware; if I remember the details correctly, different national numbers could be terminated on the same box, with the subsequent IP traffic routed to the correct ISP.
Yes, at the very least 911 (US) / 112 (EU) run on that system. For SIP numbers, routing depends on the address you set up in the phone provider's portal, so if you're using bring-your-own-SIP to provide landline phone service in your house, you absolutely have to keep the address current or you risk dialing 911 and ending up with the dispatch for your old address...
Wikipedia has good info on how RespOrgs handle this:
https://en.wikipedia.org/wiki/Toll-free_telephone_numbers_in....
Not in the US, but we had a country-wide number that each local station simply routed to a local extension. Sometimes, even multi-line wasn’t available at some stations, so it was just a sequential list of extensions to try. Looks like a hack, but it worked flawlessly.
911 and 0 (operator).
> DS1 (Digital Signal 1) out one end and a rack full of Zyxel modems on the other side
Why use real physical modems when you already have the subscribers' signals converted to convenient digital form in the DS1 bundle? Wouldn't it make more sense to put one fat DSP doing 24 modems all in bulk inside a box with DS1 and Ethernet sockets at the ISP location instead?
You're thinking about the Livingston/Lucent Portmaster 3. https://osmocom.org/projects/retronetworking/wiki/Livingston...
These were great boxes, and the only way you could get 56K was to call into an ISP with one of these or similar on their end -- the trickery that allowed 56K relied on one end being fully digital.
I was working for an ISP around that time and we had a bunch of Portmaster 2s connected via RS-232 cables to piles of modems, some rackmount some just stacks and stacks of US Robotics Sportsters. Sometimes modems would get wedged and we'd have to reboot them or "busy out" the line that they were on. Harder for the modems that were an hour away.
When the transition happened we were able to get rid of all those wires and just plug in one small phone cable for the T1, another for Ethernet, and terminate 23 lines. The Portmaster would treat all the modems as a pool and route calls to whichever was available, and once a call was done would run some testing on the modem before putting it back into the pool. It was like a space age rocket ship! At one point I was driving around with $50K worth of Portmasters in the trunk of my car, hoping I didn't get rear-ended. They were not at all cheap, but they were worth it.
Similar story with Portmaster 2s and a wall of modems laid out. The resulting blinking lights acted like a load monitor of sorts, as the activity would spread across the wall as customers dialed in after work and signed off at night. Not to mention, a wall of flashing red lights made a pretty good picture of ‘the internet’ for those just starting out on this adventure in 1997.
Oh man. I was talking to my wife’s acquaintance and he was excited to talk shop when he found out I was technical. He worked at “this small company you’ve never heard of, Livingston.” “As in, Portmaster?” “You know about that?!”
Yeah, friend. I’m very familiar, and it was amazing tech. It sure kept the data center cooling system busy, though.
This did happen eventually. In the late 90's, various companies (Cisco, Ascend) provided boxes that could handle 24 modems on a single T1 port (PRI or channelized T1.) This massively improved ISP port density. Before that, it was racks and racks of modems...
IIRC it was the only way to support 56k.
Yes! The ISP side needed to be digital to get a 56K connection.
I was working for one of the telco equipment firms around that time. We made a box that would terminate TDM trunks and had it in the lab.
I was installing Windows 2000 on a PC in the lab (manual disk swapping required) when, hidden behind another rack, several shelves full of physical modems all started calling the box at once, speakers on.
I am trying to imagine the racket that must have made. Amazing how vivid that modem sound is in memory.
Compaq Presario catching strays.
Isn't DSL basically the same thing? If you live in a jurisdiction with unbundled local loops (which I suspect you do not), your phone company terminates your DSL at their nearest "office" (often a roadside equipment cabinet nowadays) and then does Metro Ethernet over the phone company's data lines to your ISP.
If you don't live in that jurisdiction, it's the same but your ISP and your phone company are the same company.
Also you don't dial your ISP with a number.
To be clear, this was well known at the time. It was advertised that 56k was for download only and required an ISP that supported it. For those living in rural areas, there often weren’t any local options (long distance was certainly not free in those days). But for those who could get it, it was definitely a big improvement.
TIL why upload on 56k modems were capped on 33.6k. I always wondered about that. Super interesting stuff!
I also remember back in the day that my 56k modem would often only connect at like 48k or so, especially when it was raining. I guess living far out from the city made the connection more noisy?
That reminds me of my short time with ISDN and the amazing ability to receive calls on the second line without being forced offline.
I'm still mad that the high-quality voice audio possible through ISDN didn't become ubiquitous. It's ridiculous to hear the clipped frequency range of a plain old telephone line during radio interviews in 2025.
I'm even more mad that sound quality in radio interviews actually decreased. People used to use landlines for calling, with that very recognizable sound "quality" (which is of course nowhere close to what ISDN could do). Now that (almost) everybody uses cell phones, quality is sometimes good, sometimes very poor depending on network conditions. It's not rare to hear the consequences of dropped packets. Sometimes it also sounds bad with apparently no drops; not sure why - maybe people are using the phone in "hands-free" mode so they don't need to keep it in their hands?
I think it was ubiquitous. NPR in the 90s and 2000s used ISDN to allow many of their commentators to work from home. I think where you hear those crazy 8KHz clipped calls is where it's not an option. These days, the remote participants I hear in radio and podcast interviews mostly sound like they're on mobile phones: variable voice quality with almost unbearable latency.
Every once in awhile the stars align and I randomly get an "HD" voice call. It's disorienting to both me and the other party how good the quality is.
HD Voice is genuinely what it's called:
https://www.microsoft.com/en-us/microsoft-365/business-insig...
They really do sound a lot better. It always reminds me of the first time I ever made a FaceTime call, in 2010, when the high-quality audio was just as interesting as the video.
It is the reason why we switched to using FaceTime audio. The sound quality is so much better than over the normal voice line. I don’t know how to reliably get HD Voice.
Another thing ruined by the cell companies. They should be dumb pipes that don't even know what they're carrying.
Apparently it was still used for commentators at the BBC until 2023 https://www.linkedin.com/pulse/ode-isdn-paul-furley/
ISDN was just completely phased out a year or two ago in the US, too.
Probably depends on the radio station and area but I work in one and we're actually using something much higher quality for remote radio, for example: https://www.prodys.net/portable-codecs-audio/quantum-lite-2/
Of course it has to be pre-planned; someone needs to have the hardware with them. So sometimes there's a spontaneous connection over a normal mobile phone. That's something that everyone has with them at all times.
I had ISDN for around $75/mo in '94 or '95, flat rate for "local" calls. It was fantastic! But then the phone company switched it to a metered service (in Omaha Nebraska, USA) and the price would have been over $300/mo after that change. I was grandfathered in, as long as I didn't move.
I was talking to a sales rep (at the time I worked for USWest or QWest, whatever they were called at that time, which may have helped), and the sales rep told me "we are being told to actively discourage people from buying ISDN".
I get the impression that the phone company hated consumer modem use of any kind, because it tied up CO equipment 24x7, and they liked the returns on investment they got with people paying $25/mo for resources that were used an hour or less a day, sometimes with extra revenue from long distance calling. And ISDN was just another representation of that.
24 [bit] * 192 [kHz] = 4.608 [Mbps]. Maybe not sensible to do so, but many people could have Discord calls in uncompressed "high-res" WAV. It's crazy that there aren't even 16bit/44.1k modes in most voice call apps.
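That arithmetic, generalized (a trivial sketch; all figures are per mono channel):

    def pcm_bitrate_mbps(bits: int, rate_hz: int, channels: int = 1) -> float:
        """Uncompressed PCM bitrate in Mbit/s."""
        return bits * rate_hz * channels / 1e6

    print(pcm_bitrate_mbps(24, 192_000))  # 4.608  "high-res"
    print(pcm_bitrate_mbps(16, 44_100))   # 0.7056 CD-quality mono
    print(pcm_bitrate_mbps(8, 8_000))     # 0.064  one G.711 phone channel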
That's an absurd bit rate for humans, especially talking about voice.
24-bit samples is ridiculous overkill. That's a huge dynamic range that's completely unnecessary.
At 192KHz you'd be able to capture 96KHz signals, far, FAR outside the range of human hearing. Human hearing peaks at 22KHz so you only need a sample rate of 44KHz to capture the total range of human hearing.
For human voice you don't really need better than 16-bit samples at 12KHz or so. That's for great quality voice.
The only reason audio mastering is done at huge sample sizes and sampling frequencies is to prevent aliasing during mixing and to preserve higher frequency harmonics. There's absolutely no need for such rates delivering to human beings.
Also higher fidelity audio sampling is available for phone calls. The issue is more political than technical. Cellular carriers don't like to negotiate higher quality calls between one another so inter-carrier calls tend to fall back on the lowest common denominator AMR-NB codec. Intra-carrier calls don't even reliably pick AMR-WB let alone EVS available with VoLTE.
There is zero point in sampling higher than 48khz. That captures all frequencies up to 20khz or so. 192khz is just a waste of bandwidth for no actual gain.
192 kHz? That makes no sense, unless you want to have a telephone conversation between bats.
192 kHz is the sampling rate, not the sound frequency.
That still makes no sense, unless you want to have phone calls between bats.
I just want smooth 90s jazz hold music to sound ok.
nyquist-shannon means that you only need to sample at 192khz if you need to encode signals up to 96khz.
humans can’t hear above 20khz. adult humans can’t hear above 16khz or so, we lose the top end before age 20. this means that the standard 48khz sampling rate covers the entire human hearing range and then some (0-24khz). any sampling rate over 48khz for sound intended for human hearing is a total waste.
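A quick numpy illustration of what happens past the Nyquist limit: a tone below half the sample rate is captured faithfully, while a tone above it folds back (aliases) to the mirrored frequency. The tone frequencies here are chosen purely for illustration:

    import numpy as np

    fs = 48_000
    t = np.arange(fs) / fs  # one second of samples

    for f in (20_000, 30_000):
        tone = np.sin(2 * np.pi * f * t)
        spectrum = np.abs(np.fft.rfft(tone))
        peak_hz = np.argmax(spectrum) * fs / len(tone)
        print(f"{f} Hz tone -> spectral peak at {peak_hz:.0f} Hz")
    # 20000 Hz -> 20000 Hz; 30000 Hz -> 18000 Hz (aliased: 48000 - 30000)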
Why do 96 or 128khz sampled audio files sound better than 48khz ones? I blind tested and could always tell the difference between them, but not between 128 and 192
Typically, high sampling rate files are part of a different mastering process than what is published as a 44.1kHz cd audio or 48kHz dvd audio.
Also, you might possibly be sensitive to resampling artifacts if your output device runs at 44.1kHz and your file is 48kHz or vice versa.
Audio testing is hard, and testing on yourself is tricky... But if you have a sample that you're convinced sounds better at high rates than lower rates, I would urge you to put it through a tool to resample it down to lower rates and see if/when you can tell the difference. If the rate isn't an even multiple, it's worth using a tool that can dither; dithered resampling artifacts are less abrasive than undithered... I had some voice recordings to play over the phone, and everything needed to be 8kHz u-law; the 48kHz original recordings sounded better than the 44.1kHz original recordings because one is an even multiple and the other isn't, but either way, the waveforms looked worse than they sounded.
> If the rate isn't an even multiple, it's worth using a tool that can dither; dithered resampling artifacts are less abrasive than undithered...
This seems to be mixing up two things; proper interpolation and dithering.
If you have limited bit depth (in practice, 16 bits or worse), you should pretty much always dither, ideally also noise shape. This is independent of the interpolation you're using; having a rational relationship between the original and downsampled signal makes some of the implementation a bit easier, but even for something like 48000 -> 24000, you'll end up with effectively a float signal that you need to convert to your chosen bit depth somehow, and that should be done better than just truncating/rounding.
And even for interpolating between two prime rates, or even variable-rate interpolation, you can and should get great interpolation (typically by picking out polyphase filtering coefficients from a windowed sinc of some sort).
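A sketch of those two separate steps, assuming numpy/scipy are available: polyphase resampling from 48 kHz to 44.1 kHz (the rational ratio 147/160), then TPDF dither before truncating the float signal to 16 bits (no noise shaping in this sketch):

    import numpy as np
    from scipy.signal import resample_poly

    rng = np.random.default_rng(0)
    x48 = np.sin(2 * np.pi * 1000 * np.arange(48_000) / 48_000)  # 1 kHz test tone

    # (1) Interpolation: polyphase filtering with a windowed-sinc design.
    x441 = resample_poly(x48, up=147, down=160)  # 48000 * 147/160 = 44100

    # (2) Quantization: add triangular (TPDF) dither of +/-1 LSB, then round.
    lsb = 1.0 / 32768.0
    dither = (rng.random(x441.shape) - rng.random(x441.shape)) * lsb
    x16 = np.clip(np.round((x441 + dither) * 32767), -32768, 32767).astype(np.int16)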
Oh come on. I have handheld recorders that do 192khz.
"Headroom"
And the idea that humans can't hear over 20khz is like "humans taste 'sweet' on the tip of the tongue, and 'bitter' on the sides"
As we get older the hairs in our ears break or whatever and our perception decreases, but I could hear the flybacks in my old monitors, I used to be able to see the flicker in 3kHz PWM LEDs, and my induction hob drives my kids crazy but it's merely mildly annoying to me.
Get a real soundcard and some young people and play square(pwm) and sine tones starting at 16khz and find out where they can't hear it anymore. I find studio monitors with tweeters that are not paper are the best.
If you think you can hear ultrasound, it's nearly always due to nonlinearities in your system producing non-ultrasound when you try to play it. Seriously. (You can sometimes hear above-20 if it's very loud and/or you are pressing the source against your skull. Above-40 would be completely insane.)
The extra headroom can indeed be useful for some kinds of processing, but you can safely discard it for actual listening.
sampling theorem only applies to sine waves. the rate is a bit like the order of a (fourier) series expansion, and so approximations deviate as the rate reduces. how many orders is enough depends and is situational
No, it does not.
Did you generate the 48khz ones from the 96/128khz ones via downsampling or are they different recordings?
Are they the exact same volume? We perceive things slightly louder as higher quality.
Is it a double blind test, ie an ABX test?
Are the bit depths the same? Many 96khz sampled files use 24 bits per sample, whereas 48khz usually uses 16 bits per sample.
ime 48/96/128kHz 16/24bit through a modern DAC and well warmed Class AB amp and barely more than okay headphones can be told apart in a double blind test
but you do need phile-enough gear (minus the gilded pebbles hot-glued onto circuit breakers)
You only need low THD and accurate clocks.
The economics probably aren't that great to send uncompressed voice into the data center. If you have a business that gets charged XX cents (or more likely .00XX cents) per GB of traffic and you can cut that in half by using compression... I think people will opt to use compression.
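Illustrative numbers only (the $0.01/GB transit price below is made up):

    PRICE_PER_GB = 0.01  # hypothetical transit price, $/GB

    for codec, kbps in [("uncompressed 24-bit/192 kHz", 4608),
                        ("CD-quality PCM", 1411),
                        ("Opus voice", 32)]:
        gb_per_hour = kbps * 1000 * 3600 / 8 / 1e9
        print(f"{codec}: {gb_per_hour:.2f} GB/h -> ${gb_per_hour * PRICE_PER_GB:.5f}/h")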
Speech compresses really well, with modern techniques you can get it down a lot.
The big issue with analogue landline phone calls is that the audio bandwidth is so limited. It's not the full frequency spectrum; most of it is cut off.
I have no problems with Opus or mp3 at low bitrate, in the same way I have no problems with microwaved food. I just think it's crazy that no one is doing 4Mbps audio while we're routinely streaming 20Mbps video as cheap distractions.
4Mbps audio doesn't really make sense when you can just about perfectly replicate any human-audible sound in ~1.4Mbps (16-bit / 44.1kHz stereo PCM, i.e. CD quality)
EDIT: I do agree that lossless (or at least high bitrate modern lossy, like 256k Opus which is basically transparent) should be available in many more situations though.
For some context, wikipedia has a good table and diagram of codecs that do well at low bitrates, many of which are very old (G.722 from 1988, Speex from 2003, Opus from 2012, for a few examples)
https://en.wikipedia.org/wiki/Comparison_of_audio_coding_for...
https://en.wikipedia.org/wiki/Opus_(audio_format)#Quality_co...
You have an eye for quality. They should make an Apple voice dongle for that. Inside joke. Signing off.
Modern cellphone networks support wideband codecs, e.g. AMR-WB.
Is that working across networks, though?
I think it requires SDP negotiation between your UE, your EPC, their EPC, and their UE. Assuming all parties agree on the use of AMR-WB, then you're good-to-go.
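Roughly, the offer side of that negotiation is carried in SDP; something like the fragment below (the payload type numbers and port are illustrative, not from any particular carrier). If the far end can't do AMR-WB, it answers with the narrowband fallback:

    m=audio 49152 RTP/AVP 97 96
    a=rtpmap:97 AMR-WB/16000
    a=rtpmap:96 AMR/8000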
Also, you may never beat its low latency / jitter anymore.
I thought VoLTE was supposed to fix that, at least for mobile phones.
That 128Kbps on ISDN was like the gold standard back then! I knew some sysadmins who had it installed at their homes so they could be available at any time. All paid for by the company they worked for.
The most bang-for-buck "employee benefit" I ever offered to my guys back in the early 00's was a T1 line to their home for free.
We could do this since local loops to most folks were about $150-200/mo, and we already had a channelized DS3 terminated at our rack at a local datacenter for our phone banks. If you bought your own DS1 retail you'd be paying upwards of $1k/mo back then to a provider.
It was by far the best "stickiness for dollar" investment into employee benefits I've ever found back then or since.
I worked for an ISP back in around '98, and they offered to let me terminate a T1 there if I covered the line. I got a contract from the phone company for $205/mo for the T1 and got it all set up, but they were billing me $250/mo for the line. A couple months in, DSL dropped, and I got out of the T1 contract because they said they couldn't honor the contract they wrote, since $250/mo was the regulated minimum price, and I got 768K DSL for $70/mo. At the time everyone was speculating that DSL wouldn't work (crosstalk across pairs as multiple people in a bundle use DSL will cause it to fail), but they were quickly proven wrong.
I worked at an ISP that gave dedicated ISDN and 56k frame relay lines as a benefit in the late 90's. T1 would've been amazing!
I had 1.5 Mbps DSL in 1999, and I think nearly all of my co-workers either had that or a cable modem with similar speed. I think T1s were like 5x the price for the same download speed.
What market did you have 1.5mbit dsl in, in 1999? GTE in LA county offered 768k/768k symmetric. In 1999. I can't remember when that was increased, but it was after GTE got bought out.
Calgary. I remember it being asymmetric, probably 256 or 512 up.
Cadvision? 3mbps cable (sadly Rogers) wasn't impossible at the time.
Nucleus!
You know they're still around... somehow? :| The website makes their age clear: https://www.mynucleus.ca/
Back in the day people on forums would list T1 line in their signature. My ISDN line was not worthy!
I used to use 128Kbps bonded ISDN at home through BT Home Highway and an ISP called Red Hot Ant who had fixed price unmetered internet access.
And I accessed the heck out of that connection (until the ISP went bust, wonder why?), and was very much a Q3A LPB during that time.
I was one of those SAs, circa 1998-2000. We also had an on-call kit with a Nokia Communicator so we weren't completely stuck at home while on-call.
Fast-forward to 2025, and I now have dual 1Gbps symmetric fiber connections (AT&T, GFiber) into my home from opposite sides of the house. (It's totally gratuitous and I'll probably cancel GFiber in a few months, but I wanted to have it wired up so I could more quickly start service in the future.)
There was also that period of time when Ricochet was available in some places.
https://en.wikipedia.org/wiki/Ricochet_(Internet_service)
Wireless 56k baud. So you could take your luggable laptop circa 1994 with you and dial in to work... given you lived in SF.
Oh man I forgot about Ricochet. I remember they had service in NYC too, and it turns out also in a few other big markets. https://www.nytimes.com/2001/08/16/technology/ricochet-netwo...
Ha! My mom worked as a build engineer at BNR (later Nortel) in the 1990s and we got a free 2nd phone line for her to dial into work (99% used by me and my brother for internet). She could have gotten ISDN had it been available in our small town, but alas...
> All paid for by the company they worked for.
One of my internships in college was at Sun Microsystems in the org that provided this connectivity to employees. My job was to automate pushing updates to connection software and modem firmware down to clients, but I ended up doing a lot of technical support as well.
I had one for work as well in the late 90s.
The other (often overlooked) benefit that ISDN provided was 24/7 connectivity in an age of dialup.
Oh and you could spoof your outgoing phone number for caller ID ]:D
Yes but with channel bundling you also had to pay twice. One channel was only 64k.
Indeed, ISDN was amazing for its time, but Ma Bell and her successor ILECs were way too proud of it, so in the end it went nowhere and never made them all that much money.
If they had gotten out of their own way when the internet came around they could have charged a small monthly fee to upgrade to a "digital phone line". Lots of people would have switched.
Both our ISP and the phone company charged for every BRI data "call" placed and for minutes connected, rather than a flat rate, on a business-class ISDN line. I discovered that IBM AIX's web browser package contained a telemetry background process that dialed home to Big Blue every 2 hours, triggering our Cisco 1604 router's dial-on-demand even when no one was in the office. A simple deny IP ACL fixed that problem. There was no opt-out and no mention of it, and it never became the story it should've been.
As a kid, we didn't have ISDN, but we did have a second phone line dedicated to the modem.
My dad ran a BBS from like 1992 to 1995, which started falling out of favor, especially as the users were getting more busy signals because the modem phone line was tied up with the internet connection.
I made a small career out of building Internet Service Providers in California in the early days, and will never forget how liberating it was to carry my laptop to the Griffith Park observatory with a fully-charged Ricochet modem plugged in, communicating with my house down in Los Feliz, where another Ricochet modem gateway'ed me to the Internet via the house 56k line ..
It was truly astonishing to be up there, checking email.
A few, what seems very short, years later .. and now it is just normal.
Isn't it the opposite of the headline? Modems were hamstrung by digital phone lines you didn't know we had. One final trick allowed them to match, but not exceed, those digital lines.
To me it looks like the article is largely told from the perspective of ISPs connecting themselves between each other, and how the constant analog/digital conversions between carriers were causing problems sustaining the 56k data rate that the (analog) last mile was always capable of... and how converting their internal systems and backhauls to digital solved that issue.
Exactly. More specifically, Internet servers in data centers connected to the IP network, not the phone network (not surprisingly), so they almost always had an uplink significantly faster than 56 Kb/s. And the analog local loop had encodings that could transmit at 56 Kb/s. But these encodings were not compatible with the analog-digital conversion used in the phone network backbone.
While today you'd probably expect that the IP network could connect to every phone branch office where the local loops were connected to the phone network, that wasn't necessarily the case - Internet data would usually have to travel some of its way over the phone network backbone, with the problematic digital encodings. V.90 and related standards allowed the phone network to accept the digital data directly from ISPs and send it in digital form to the branch office, without attempting a digital to analog to digital conversion to inject it as digital voice into the phone network. That's why the upload speed couldn't be improved via this method - it would still need to undergo analog-digital conversion to travel across the phone network to the ISP where it could enter the IP network. (V.92, a later standard, improved upload speed to 48 Kb/s via fancier signal processing trickery that could survive the digital voice conversion.)
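The 56k number itself falls straight out of DS0 arithmetic; a quick Python sketch (assuming North American robbed-bit signaling, which leaves only 7 reliably usable bits per sample):

    # A DS0 voice channel is 8000 samples/s x 8 bits = 64 kbps.
    # Robbed-bit signaling steals the low bit on some frames, so only
    # 7 bits/sample are dependable for data, hence 56 kbps downstream.
    samples_per_sec = 8000
    ds0_bps = samples_per_sec * 8      # 64000
    usable_bps = samples_per_sec * 7   # 56000
    print(ds0_bps, usable_bps)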
All the early metropolitan or long-haul fiber (mostly SONET) networks were digital aggregations of various circuit modes (DS*) in those days. It made sense, since the phone network was pretty much the only long-haul network around and even the pure-IP networks didn't yet have enough of a market for alternative protocols. I've been out of the core-networking loop for a while, but my understanding is that most modern long-haul networks are ethernet over OTN.
The phone companies had enormous sway over the development of these longer haul protocols. The debate around packet sizing almost always favoured smaller cells (especially ATM), which was more ideal for voice - with the added overhead for more standard IP packets. They were also often very connection-oriented, with all the extra equipment overhead required.
Circuits meant for IP didn't convert digital to analog and back unless they had to share the line with a phone at one end. If you had digital connections to both ends of a digital circuit, you'd use all the bits for data.
The analog lines before digitalization were limited to roughly 4 kHz of bandwidth per voice channel. This is because, to deliver a circuit between two points, the system generally figured out which slice of bandwidth wasn't in use and frequency-shifted the entire connection into that band.
Yeah, kind of. The u-law sampling used to produce the 64kbit/s audio channels in ISDN isn't good for encoding a signal created by a modem.
But if it was a direct copper pair end-to-end, then attenuation and other electrical characteristics would have made it hard to achieve the higher speeds; these are the Shannon limits they mention.
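To put rough numbers on those limits: Shannon capacity is C = B * log2(1 + SNR). A Python sketch with textbook-ish voice-band values (the bandwidth and SNR figures here are illustrative assumptions, not measurements):

    import math

    # Shannon capacity: C = B * log2(1 + SNR)
    bandwidth_hz = 3100          # nominal 300-3400 Hz voice passband
    snr_db = 37                  # plausible SNR for a decent analog loop
    snr_linear = 10 ** (snr_db / 10)
    capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
    print(round(capacity_bps))   # ~38000, roughly why analog modems stalled near 33.6k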
I have to wonder if the (theoretical) cap on the copper wires was more because of the technology standards in play at the time. Surely the copper wires could have handled more if they no longer had to carry voice communication (with the old tech specs of the time)?
Ok. Searched around. Here is an article that states old copper could have carried 1 gigabit.
https://www.newscientist.com/article/2317040-ordinary-copper...
The practical implementation of this (at the time) was ADSL, HDSL, IDSL, and SDSL. Those technologies all took one or two copper pairs and terminated them on a DSLAM (or a similar device in the case of HDSL) instead of on a telephone switch. For physical pairs connected to an analog voice port on a telephone switch, between the band-pass filtering and DS0 coding, you were never going to get more than 56kbps. The xDSLs could get between 144kbps and several Mbps in practice, depending on the variant and line conditions.
Keep in mind that at the time, LAN speeds over controlled twisted copper pairs over short distances (100m) were 100mbps - 1gbps.
If you've ever seen the physical condition of the telephone company's outside subscriber wiring (what they call "outside plant") -- and particularly the intermediate splices between central office and subscriber -- you would quickly disabuse yourself of the notion that you could transmit anything close to 1gbps over a twisted pair.
If the copper isn't in good enough condition, you can always try with wet string instead:
https://www.revk.uk/2017/12/its-official-adsl-works-over-wet...
Those copper wires ran from your house to the local central office, the "last mile" of the connection. (which was sometimes 2-3 miles long)
A quick read of the linked article seems to indicate that it's BS, as it doesn't account for the real topology of the local loop. In particular, in older neighborhoods you had a bundle of pairs going down the street, and a new connection was made by patching in to a free pair, creating a "T" shaped circuit. When a house was disconnected, part of this "stub" might have been left attached; over time a single pair might accumulate multiple disconnected stubs. The capacity of that copper circuit is far lower than a straight run.
In addition in many cases corrosion and water cause noise, further reducing bandwidth - I can remember having noise so bad on rainy days that I had to call and get them to fix it. (I assume they patched us onto a free pair and abandoned the noisy one)
Of course none of this is related to the end-to-end bandwidth of the old telephone system. Starting in the 50s a longer-distance phone call would get a single-side-band channel on a microwave link, with about 3KHz allocated. Later on calls got sampled at 8KHz with 8-bit mu-law (logarithmic) encoding, or A-law in Europe, and transmitted digitally.
Of course ancient telephone wiring can carry 1 Gbps. The real question you always, always, always, need to be asking yourself is:
Over what distance?
Make that distance short enough, as has happened with FTTN or FTTC deployments in a whole heap of places, and you're basically building a network that's, and I'll keep this very brief, subpar.
Since you mentioned a UK context there: Openreach rolled out an upgrade that kept the last mile of copper, but now, just about a decade later, they're rolling out full fibre. Whatever argument copper had, it went out the window near enough a decade ago.
OK, had a look through the linked paper. The big graphs, on page 8, tell you that if you decrease the twist length (which would entail re-laying all the copper in the entire network, at which point you may as well put down fibre instead) you will get -20 dB at 10 GHz over a distance of 0.5m, 50cm, less than two feet, instead of -25 dB in the worst case.
In other words, instead of losing 99.7% of the signal over that distance, it'll only lose 99% of the signal. Sure, it helps, but consider me underwhelmed.
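For anyone checking the arithmetic, those percentages are just power ratios; a two-line Python check:

    # Fraction of signal power remaining after a loss given in dB
    def remaining(db_loss):
        return 10 ** (-db_loss / 10)

    print(remaining(25))   # ~0.003 -> ~99.7% of the power lost
    print(remaining(20))   # 0.01   -> 99% of the power lost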
Distance is the problem. The gauge is small so we can't throw a very strong signal down the wire. So you have repeaters on almost every span of any appreciable length.
The very first part of a dialup modem sound? Where it's playing a tone that reverses phase at regular intervals? That tone is actually designed to disable all the repeaters and echo cancelers that are in your switched circuit.
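If you're curious what that pattern looks like, here's a Python sketch that synthesizes a 2100 Hz answer tone with a 180-degree phase flip every 450 ms, which is the convention I remember echo-canceller disabling being keyed on (parameters are illustrative, not a conformant V.25 implementation):

    import math, struct, wave

    # 2100 Hz tone with a 180-degree phase reversal every 450 ms
    rate = 8000                      # samples/s, telephone-grade
    freq = 2100.0                    # Hz
    flip_every = int(0.450 * rate)   # samples between phase reversals
    phase = 0.0
    samples = []
    for n in range(rate * 3):        # 3 seconds of tone
        if n and n % flip_every == 0:
            phase += math.pi         # the reversal the cancellers listen for
        samples.append(int(12000 * math.sin(2 * math.pi * freq * n / rate + phase)))

    with wave.open("ans_tone.wav", "wb") as w:
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", s) for s in samples))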
Also two parallel phone lines are prone to capacitive coupling. I had a case so bad once that one office could pick up the phone and nearly perfectly couple onto their neighbors line and hear all their conversations. It was a 50/50 which port on the PBX recognized the tones and started the call when either of them picked up to dial out.
They did it for efficiency. The observation was that the human voice doesn't use most of the audio spectrum, so they optimized everything for voice.
A reasonable decision at the time.
It can carry more; the whole value proposition of DSL was a high-speed link over existing cabling. I think the dial-up limitation was what speed you could sneak past the existing speech-focused analog signal processing equipment, whereas, as the article explained, making that analog link as short as possible improved speeds quite a bit. DSL was what you could achieve over the same lines when you were not forced to constrain your signal to speech frequencies.
It can carry 1 gigabit, over a few metres. I.e. Not even the length from the street to your house.
Australia tried this, it's physically impossible.
https://en.wikipedia.org/wiki/G.fast
I remember the day we got 56K from our ISP. Our modem was 56K (no idea what model, some internal WinModem), but the ISP was limited to 33.6K. Then one day, the connection screech sounded different, and I excitedly pointed this out. Several seconds later, it connected, and lo and behold, we had 66% faster downloads.
The fact that all of this worked continues to amaze me, but then, so do mobile phones. I understand at a high level how CDMA works, but it’s just so insane…
I could tell what connection speed my modem was going to be by the sound of the handshake. There were distinct sounds for all the different modes.
I remember us getting our first modem, it was 800 baud! Then we moved to 2400, 14.4, 33.6 and eventually all the way to 56k.
800 baud? Do you mean 300 baud? 800 baud is not a standard speed, so if they did exist you'd have to supply both ends of the connection.
Oh yeah it was 300!
Darn, I was hoping it was some sort of oddball thing. Those are always interesting.
I remember some ISPs allowing you to "shotgun" two 56k modems for double the speed!
Like some other commenters, I also fondly remember ISDN. Overall, though, I found it finicky. Sometimes one channel would just drop, even if a phone call wasn't coming in. And in order to use a traditional analog phone with your ISDN line, you needed a special powered "TA" adapter or the phone wouldn't ring when a call came in.
> Using the Digital Signal 0 (DS0) encoding
DS0 is not encoding. It's (pseudo) framing.
> phone calls became digital with
The G.711 encoding in either aLaw or muLaw format.
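For the curious, the companding curve itself is tiny. Here's a Python sketch of the continuous mu-law form (the real G.711 uses a segmented, piecewise-linear approximation of this curve, so treat it as illustrative):

    import math

    MU = 255  # mu-law parameter (North America / Japan)

    def mu_law_compress(x):
        """Map a sample in [-1, 1] to [-1, 1] with logarithmic companding."""
        return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

    def mu_law_expand(y):
        """Inverse of mu_law_compress."""
        return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1) / MU, y)

    # Quiet samples get a disproportionate share of the 8-bit code space:
    for x in (0.01, 0.1, 1.0):
        print(x, round(mu_law_compress(x), 3))   # 0.228, 0.591, 1.0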
Now I'm curious how this worked in my city, where we most definitely didn't have anything digital in our phone system at the time. As in, you had to use pulse dialing, and sometimes, rarely, your calls would glitch such that you would hear someone else talking over your call. Yet I remember consistently getting 40-something kbit over that.
LZW?
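If that's a nod to modem compression: V.42bis was indeed built on an LZW-style dictionary coder, so compressible text could move faster than the raw connect rate. A toy LZW compressor in Python for intuition (nothing like the real V.42bis dictionary maintenance):

    def lzw_compress(data: bytes) -> list[int]:
        """Toy LZW: emit a dictionary code for each longest known prefix."""
        dictionary = {bytes([i]): i for i in range(256)}
        next_code = 256
        current = b""
        out = []
        for byte in data:
            candidate = current + bytes([byte])
            if candidate in dictionary:
                current = candidate
            else:
                out.append(dictionary[current])
                dictionary[candidate] = next_code
                next_code += 1
                current = bytes([byte])
        if current:
            out.append(dictionary[current])
        return out

    codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
    print(len(codes), "codes for 24 input bytes")  # fewer codes than bytes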
Man I wish we still had slow connections, given all the dumb crap on webpages these days.
It was crazy to think about trying to get your page to load in less than 64k a few years back.
There was a time when 300 and 1200 baud modems cost about $1/baud.
Yes, and I provisioned them. You could shotgun them. Fun times
ah, good ol' capping out at 26.4kb/s...until 2005
We were stuck at 26400 until 2001, when DOCSIS reached the area. I recall hearing speculation that a pair gain (https://en.wikipedia.org/wiki/Pair_gain) was the cause.
Around 2000, I saw a crew pulling new phone lines through the neighborhood because everybody was getting a second line and they were running out, but even after switching to the new copper, we were still stuck at 26400. 20+ years later, it looks like 25/5 ADSL is now available at that address, so the new copper wasn't a complete waste.
More than one A-to-D conversion would knock it down to no more than 33.6. Pair gain units would cause problems. Bridge taps and load coils could also be problematic, but that was much more of a concern on DSL. Older amplifiers had very narrow filters and would also cause slow speeds. Echo cancellation hardware would ignore the in-band signal to get out of the way and cause problems.
As there was no legal compulsion to get them to act, BellSouth wouldn't do anything to help slow connect speeds for internet dialup. The trick was to lie and say you were having problems sending a fax; then they were required to act. They wouldn't even bother testing first, as it was quicker to just re-engineer the line to the best practices of the day.
I worked tech support at an ISP. Occasionally customers would call in from their brand new subdivision to complain about our crummy service. “This house is brand new and we only get 26.4. What’s wrong with your system?”
And then I had the satisfaction of explaining that Ma Bell cheaped out and used pair gains to build out their block cheaply. Sorry, call them to complain, or move.
I recall some of the same speculation. In my case we wound up getting a satellite connection, with a wonderful 600ms round-trip time.
Yes, these modems were almost-ISDN (minus the razor-fast call setup), and required a fully digital backend to work. They could only do 56k in one direction, toward the user, too. But they were made for internet access, so that didn't really matter.
ISDN had much, much better latency than even 56k modems. Modems were around 150ms minimum. ISDN was often in the sub 20 ms range. This made a big difference for chatty protocols like HTTP, telnet, etc.
True, though in those days that didn't really matter except for gaming. The bandwidth was so low, and the web was growing in content so fast, that the local uplink was usually congested, causing delays due to queueing and pending acknowledgements. Switches and routers weren't quite as optimised for low latency either. A lot of local networks were still 10base2 (or 10baseT with hubs, not switches), so collisions added further latency.
What I loved the most about ISDN was the quick call setup. Took like 1 second max and you had a 64kbit channel. 56k modems went through a dialling phase, a connection phase, endless handshaking...
Or gaming. I never got good at Quake because of it. IIRC they added hardware compression to the later versions (I think starting at 33.6?) that added latency, too.
I remember my brothers friend in rural Portugal having one way satellite Internet back in the 90s to very early 00s - you used a standard dial up for the upstream, but with a satellite dish got much much faster downloads. Blew my mind that you could go out one way and receive another and still get a functioning (and fast) connection.
Hey, in early '00 I briefly worked for an ISP trying to do rural Portugal/Spain satellite internet (to exploit rather large subsidies offered at the time)! AFAIR it was 1-2 Mbit downstream, shared by all subscribers in one area, from a geostationary sat with 500-1000ms pings. Almost unusable for normal browsing. The company was from a defense background, and from my limited understanding at the time they were piggybacking on some military leftover tech. The idea was to use commercial WiFi to link local customers to a central location, with some magic proxy server trying to hide all that latency.
I remember reading about this back in the day. I seem to recall the latency being a killer.
The first cable Internet service I had was telephone return. Downstream was over the cable modem at ~512Kbps (IIRC) but upstream was over a dial-up modem.
It was cool having a fast downstream, but the slow upstream over finicky dial-up was a pain in the ass. If the dial-up dropped, the in-progress downloads all died because no ACKs could be sent. Gaming was no better than plain dial-up since your upstream had the same shitty latency.