Path: utzoo!utgpu!water!watmath!clyde!bellcore!faline!thumper!ulysses!ucbvax!SKL-CRC.ARPA!symchych
From: symchych@SKL-CRC.ARPA (Tim Symchych)
Newsgroups: comp.protocols.tcp-ip
Subject: Re: Linking LAN's via Public X.25
Message-ID: <8806022259.AA25446@skl-crc.arpa>
Date: 2 Jun 88 22:59:43 GMT
Sender: daemon@ucbvax.BERKELEY.EDU
Organization: The Internet
Lines: 81

Perhaps another view of IP over X.25 might help. While the original question was asked about Sun X.25, there are a number of networks within the ARPA/Internet that use IP over X.25. The experience that Phil Karn described at Bellcore does not match our experience in establishing the DRENET and XDRENET here in Canada.

There is no doubt in my mind that some implementations of packet switching using CCITT X.25 are poor. We did some testing of various networks about two years ago, including ARPANET (1822 links), Telenet in the USA, SATNET, and the public packet switching networks in the UK and Canada (Datapac). Packets were timed in some great loops, and by network segment, to allow us to determine the level of service over the various segments. Generally, the packets through Telenet and SATNET suffered the most delay. The X.75 gateways didn't seem to work all that well then either. I'm not sure that X.25 was designed for slow speed terminal multiplexing. NO packet switching network works well at that task.

In our own case, use of TCP/IP over X.25 has proven effective from both points of view: cost and throughput. In Canada, we have at least one first-rate X.25 packet switching network. Through eight years of using Datapac for applications, including five years of TCP/IP over X.25, we have found no problems in service or reliability. By the way, I don't work for any of the Bell franchises, nor do I own any of their stock. I just pay the phone bill.

We have several hosts and gateways that connect LANs using X.25. I don't agree that X.25 is overly complex, or that managing virtual circuits is a big problem. Our gateways and hosts usually do the work in software on board-level products. We did some preliminary testing with SunLink using our hosts and one of their software implementations. Everything worked, but the Sun stuff has some minor rough edges. For instance, the Sun would only allow one virtual circuit between two hosts, even though our side would allow one per user process. Imagine sharing your TELNET session with an FTP transfer, each IP packet waiting its turn in the X.25 queue. TELNET over X.25 is bad, but Sun made it worse. I hope they have worked on that "kludge".

Our tariff structure for communication services is almost the reverse of that in the U.S. Our leased lines cost us dearly, but our packet switching costs much less. If you look at our telco infrastructure, it's easy to see why. Our population is spread across the country, and leased lines are nearly always new services.

By way of cost and performance, we have this kind of experience: rental of a 9600 bps X.25 modem with 20 virtual circuits is $390.00 U.S. per month. Traffic charges range from $35.00 to $300.00 U.S. per month per X.25 interface, depending on usage and distance. The latter figure is for about 50 MB per month to a site 1500 miles away, using the Datapac cost formula. On most of our X.25 legs, we get between 50 and 75 per cent of the 9600 bps, but this depends a great deal on the HOST implementation of TCP/IP. In contrast, our 9600 bps HDH line gives us about 80 to 85 per cent of the line speed.
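The arithmetic behind those percentages is simple enough to script. Here is a rough sketch in Python (the speeds and efficiency fractions are just the ranges quoted above, not new measurements):

    # A rough sketch, not a measurement: what the quoted line
    # efficiencies mean for a bulk transfer.
    def transfer_hours(megabytes, line_bps, efficiency):
        """Hours to move a file at a given fraction of line speed."""
        bits = megabytes * 1000000 * 8
        return bits / (line_bps * efficiency) / 3600.0

    for label, eff in [("X.25 leg, worst", 0.50),
                       ("X.25 leg, best",  0.75),
                       ("HDH line",        0.80)]:
        print("%-16s 10 MB in %.1f hours" % (label,
              transfer_hours(10, 9600, eff)))

At the bottom of the X.25 range, a 10 MB file takes about 4.6 hours; the HDH line moves it in about 2.9.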
Our 56 kbps line gives about 20 to 30 percent of the line speed, but we are not sure that we have hosts at either end that can drive it faster. To send a 10 MB file across the country would cost about $75.00 U.S. That's about the same as it would cost to copy the data onto a new mag tape and send it FEDEX. Some of our sites will be running 19.2 kbps X.25 service, which will double the monthly modem cost and increase throughput, though we will still be limited by the TCP/IP on the hosts. We expect that these 19.2 links will get us about 12000 bps, or just about what we get out of our very expensive 56 kbps line between our core gateway and the U. of Rochester. When our core gateway was replaced in Feb/88 with a Butterfly, the measured throughput went up slightly even though the HDH interface was replaced with an X.25 interface. I've tried to figure that one out.

I will be the first to agree that our needs and network structure are different from Bellcore's. However, I view X.25 packet switches as a low-cost backbone that will allow us to operate until we have sufficient traffic levels to support leased lines. If there is a target needing a thumper, it sounds like the implementation of X.25 in Telenet will do. When it comes to bad implementations of TCP/IP, the cry is "burn them at the stake". There are good and bad X.25 implementations, and I recommend burning where it is due.
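P.S. For anyone who wants to check the cost figures, the two examples above each imply a per-megabyte rate. Another quick Python sketch (the real Datapac tariff is usage- and distance-sensitive, and I have not reproduced its formula; these are just the implied averages):

    # Implied average rates from the two examples above; the actual
    # Datapac tariff formula is distance- and usage-based.
    modem_rental = 390.00            # USD/month, 9600 bps modem, 20 VCs

    rate_cross_country = 75.00 / 10  # USD/MB, the 10 MB file example
    rate_1500_miles = 300.00 / 50    # USD/MB, 50 MB/month at 1500 miles

    print("cross-country: $%.2f per MB" % rate_cross_country)
    print("1500 miles:    $%.2f per MB" % rate_1500_miles)
    print("busy month, modem plus traffic: $%.2f" % (modem_rental + 300.00))

The two rates, $7.50 and $6.00 per MB, differ a little (the distances differ), but they agree on the order of magnitude, which is the point of the FEDEX comparison.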