Thanks to a documentary on Netflix and some Googling, I now have the street address of the home of a Toynbee tiler who may be the genesis of an enigma I've been following for 30 years.
I could go sit on his porch until he has to come out, and try to bond with him, but that seems a pushy, even cruel thing to do, even if we're both equally loony. I would if I had to, but I don't see the need to make him suffer that.
It turns out he's a paranoid schizophrenic, and paranoid for good reason (see the movie), and he doesn't want to engage the public. He just wants his meme known. He's been remarkably creative and persistent about it, so I say let's give it to him. Let's validate his fantasy and explore the question he presents so desperately that he found a whole new way to engage us in it: the tiles.
Let's have the public talk about Kubrick's 2001, and how it references Toynbee's theories about molecular regeneration, and maybe resurrection. Toynbee's work was original and seminal to many other great works. Toynbee was an overlooked visionary genius. The Toynbee tiler is too, in a different way: he has forced us, against our will, to examine his premise through sheer persistence and force of will.
This needs to end with his participation in the discussion, but let's lure him out rather than force him out. Give him the respect he deserves for intriguing us for 30 years, and he just might speak and give us some insight into why he has teased us so.
Monday, February 06, 2012
Who Da Punk (Mini-MSFT) surrenders
The defeatist drumroll for Microsoft has overcome one of my favorite bloggers, "Who Da Punk" of Mini-MSFT fame.
Purportedly a senior manager at Microsoft posting an anonymous blog these last eight years, "Mini" has been an ineffectual - but insightful - proponent of change. His/her blog has also been a useful avenue for Microsoft insiders (and fakers purporting to be) to vent their anguish at the misdirection of the company, its processes and its HR issues. It has also become a focal point for haters. The comments on prior posts are quite interesting and can give more insight into the Redmond giant's internal processes and history than the company might like. They're still available.
But there will be no new ones. In his latest post Mini makes it clear that he can bear the incessant depression no more. He's hanging up his insider geek hat and will not vet comments any more. After eight years, he gives up. He may comment on the quarterlies, but he won't allow comments until the blogging software (this very one!) supports user moderation.
That's the stated reason anyway. My guess is that Microsoft has upgraded their web intelligence and he's fearful of being found out.
Tuesday, February 22, 2011
This blog was a bad idea
I'm far too busy to post to this blog. I already comment on the issues of the day at Slashdot, so this blog was only for longer articles and personal memes that didn't happen to come up regularly there. Maintaining a pseudonymous blog while also posting under my real name has become a burden. Soon I think I might pierce the veil and connect my symbolset handle with my own self. It's a dangerous thing - I've made some bold posts, predicted the future, insulted some folks. I may even have said some actionable things. But it might be interesting to see whether the world can handle folk who speak their own mind.
Even if it worked out OK I would miss the handle thing, and frankly the symbolset persona isn't my professional self. Perhaps I'm a bit schizophrenic. I would continue to operate it even after it became transparent.
Saturday, February 28, 2009
HP Systems Insight Manager simplified
What is HP server management software?
Modern servers for large organizations come in racks. These racks can have 5 extremely powerful and expandable servers, 42 of the thinnest servers, or even up to 128 server blades. When these computers are set up and while they're running there's no keyboard, monitor or mouse connected to them. The physical installation and the software installation and configuration are handled by completely different people. An essential piece that makes this work is a built-in system manager.
For HP servers the built-in system manager is called Integrated Lights-Out, or iLO. It's a dedicated computer inside the server that runs on standby power, so it's on even when the server is off. It's accessible through its own network port or a shared one. It has access to all of the server's health monitoring systems and to the keyboard, mouse, USB and video, and it can even flash the BIOS. Older servers have the basic iLO; more recent servers have iLO 2. Both versions can be upgraded with a license key, entered in the CMOS settings, that enables more advanced features like the graphical remote console and virtual media.
To make working with many thousands of systems manageable, you need a coordinated system that lets you monitor and perform operations on servers, groups of servers, or entire datacenters. That's where HP Systems Insight Manager comes in.
Systems Insight Manager (SIM) consists of a web server, a database and a set of utilities. Although HP doesn't make a strong point of it, the machine running this service is called the Central Management Server (CMS). The web server provides a single integrated view of all the servers by presenting a visual representation of the servers themselves. It can detect servers on the network, or you can tell it where they are. Once they're configured, servers are available in the interface, monitored continuously and managed. You can actually look at the picture of the rack on the web server and see the lights blinking. It's integrated with the built-in management hardware of the servers, so by selecting a server you can perform many different operations. You can power the server on and off, turn on the blue service ID light, configure BIOS settings, flash the BIOS or even install an operating system. Using the remote console you can watch the machine boot as if you were in front of it with a keyboard, mouse and monitor, and use whatever graphical interface you install too, so configuring Windows or Linux doesn't require a trip to the server room. These features work outside the operating system, using dedicated hardware inside the server that runs on standby power, so they work even when the server is turned off. Because it's web based, it can be made available anywhere on your network or anywhere in the world. Systems Insight Manager runs on its own server, and can be downloaded for free.
HP offers some software packages for sale, and provides some with each Proliant server. These software packages help in running, managing and configuring an individual server. They all also plug into Systems Insight Manager, enabling various features from the higher level view.
The ProLiant Support Pack that you get with each server includes a suite of drivers and software for supported operating systems, including a service called the System Management Homepage. This service runs in the operating system, presents a web-based interface, and gives you access to all of the built-in system monitors and the system management hardware in each server.
Systems Insight Manager detects this interface and adds a link to each system's System Management Homepage. The ProLiant Support Pack also comes with a CD package called SmartStart that allows remote installation of an operating system on a single server, and it includes a scripting toolkit for scripted installations. There's also an Array Configuration Utility (ACU) that lets you configure locally attached hard drives on HP RAID controllers. A diagnostic suite is included as well, which lets you perform diagnostic tests outside the operating system (offline edition) or inside it (online edition).
Insight Control Environment is available at additional cost. It includes all of these modules, some of which are also available separately:
Rapid Deployment Pack allows you to build and configure system images and stream them to individual servers or groups of servers.
Virtual Machine Management Pack assists with virtual machines.
Vulnerability and Patch Management lets you set up repositories for patches and automatically deploy them.
Insight Power Manager allows you to monitor and control power usage per server and by groups of servers.
In addition to HP servers, Systems Insight Manager can also monitor other devices that speak SNMP, the Simple Network Management Protocol (simple in name only), which is supported by almost all modern network devices.
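As a rough illustration of what this kind of SNMP monitoring does under the hood, here's a short Python sketch that reads a device's sysDescr. It's a minimal sketch assuming the classic pysnmp 4.x high-level API; the address and community string are placeholders, not anything specific to HP SIM.

# Read sysDescr from a device over SNMP v2c - roughly what a discovery or
# monitoring pass does. Address and community below are placeholder values.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),       # SNMP v2c, 'public' community
        UdpTransportTarget(('192.0.2.10', 161)),  # hypothetical device address
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
    )
)

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(name, '=', value)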
Other vendors have similar systems for managing their servers. These power tools for server administration help reduce costs and enable fewer server administrators to manage far more servers.
Wednesday, February 11, 2009
Layer 2 networking - Simplified - First in series
Layer two is the layer of the network that lies above the physical cables, but below the Internet Protocol and other higher-level protocols. Understanding layer two helps with diagnosing problems that might occur between your PC and the router that takes your communications off of your local network and into the greater intranet or the Internet.
In this first installment I'm going to cover some basic terms and describe the basic equipment. In the second installment we'll go over an example network and hopefully tie together how these things work. If I get to a third segment I should be able to step up to the next level of our network model and tie some networks together.
The real purpose of this article isn't to educate you - it's to cement these ideas in my mind in a way that is accessible to people I talk to on a daily basis. If you find this useful you're welcome to copy it in any way you like. I hereby dedicate it to the public domain.
Terminology
Technology
The addressing scheme
Reliably unreliable
The packet
Your network card
The Hub, extender and bridge
The switch
VLANs
QOS
Trunking
Routers and other gateways
Terminology
Layer 2 refers to the second layer of the 7-layer OSI networking model. Although there are other models that describe network architecture, the OSI model is the accepted standard for most people. Layer two is the level that defines a "network". Below this level are devices and media; above it are internets and intranets. Our topic here is a single network. For completeness we'll also cover virtual networks and touch on routing between them, because those are issues dealt with at this OSI layer.
IEEE 802.3 is the name of the working group that standardized Ethernet and documented these standards, which are still in use today, though most of the underlying technology was first developed by Robert Metcalfe and his colleagues at Xerox PARC.
IEEE 802.11 is the name of the working group that adopts standards for wireless Ethernet.
There are other ways to do networking than Ethernet. They're all odd and/or dead, so I won't cover them here.
An octet is 8 bits. The term byte technically refers to the smallest unit of data a particular system handles, but let's not be pedantic. For the purposes of this article a byte is an octet is 8 bits, and it can be represented by eight binary digits, two hexadecimal digits or a value from 0-255.
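For example, here's the same octet written all three ways (a trivial Python illustration):

value = 0xA8                  # one octet
print(format(value, '08b'))   # 10101000 - eight binary digits
print(format(value, '02x'))   # a8 - two hexadecimal digits
print(value)                  # 168 - a value from 0-255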
Packets, datagrams and frames are not quite the same thing. Despite this, the terms will be used interchangeably here to refer both to the information being passed (data) and to the control information that describes it and how to get it where it's going (header). The purpose is to make the material more accessible; if that bothers you, please read somebody else. Communication is not well served by excess precision.
Technology
We'll be discussing wired Ethernet over copper. For most of this material wireless networking is similar, but hopefully I'll find time to write it up in detail another time. The topic is big enough already, so I'll stick with wired networks using Cat 5e or better media. Fiber is an important part of modern networking, but fiber networking at layer two is similar enough that I can probably avoid discussing the differences. There are other ways to do networking, but they're either of historical interest or for special purposes only.
The network under discussion here will be only a single Local Area Network and the discussion will end at the first router we come to. Once a router re-addresses your data, it's no longer on the same network and passes beyond this topic. The only exception is when we get to VLANs, for which a cursory discussion of routing is necessary, since VLANs are common parts of modern networks and appropriate for discussion at layer 2.
The addressing scheme
For layer two Ethernet we have a special sublayer, the Media Access Control or MAC layer, that deals with addressing. The rules are pretty simple. A MAC address uniquely identifies a particular device that will receive packets. A MAC address is typically 6 bytes, or 48 bits. MAC addresses are usually written as pairs of hexadecimal digits, called out in the order of transmission, such as 01-02-03-0a-0b-0c or 01:02:03:0a:0b:0c. In both of these cases the first half is referred to as the organizationally unique identifier (OUI) and the second half is the network interface controller (NIC) specific ID. The original purpose was to let network controller vendors identify their products in the MAC address while still leaving a way for each NIC on a LAN to have a unique ID. Thirty years on, you can probably anticipate that individual vendors have acquired multiple OUIs and that duplicate MAC addresses do turn up. That's OK, though, because these days the MAC address is a configurable part of the NIC, so if you have two with the same number (an address collision) you can fix it.
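To make the OUI/NIC split concrete, here's a tiny Python sketch using the illustrative address from above:

# Split a MAC address into its OUI (vendor) half and NIC-specific half.
def split_mac(mac):
    octets = mac.replace('-', ':').lower().split(':')
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError('not a 48-bit MAC address: %s' % mac)
    return ':'.join(octets[:3]), ':'.join(octets[3:])

print(split_mac('01-02-03-0a-0b-0c'))   # ('01:02:03', '0a:0b:0c')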
Reliably unreliable
It's counter-intuitive, but it works. Ethernet at layer 2 is deliberately unreliable. There is error detection (the frame checksum described below) but no correction or retransmission: Ethernet delivers packets on a "best effort" basis. Unexpected packets received on a port are ignored. Packets to unknown hosts are simply discarded. At layer 4 we get protocols that detect whether communication succeeded, but the equipment at layer 2 literally doesn't care. Reliable methods have been tried, but they failed to keep up with the speed of Ethernet and were ultimately discarded or pushed into specialized applications.
The packet
The packet consists of a header and your data. If you have no interest in programming or network analysis you can safely skip the rest of this part. The header starts with a preamble that identifies the packet as an Ethernet frame: 7 bytes, each with the value 10101010. This is the signal that lets the receiver know there's data coming down the wire. It's followed by the start-of-frame delimiter, a single byte with the value 10101011. Then comes the destination MAC address, then the source MAC address. The next field is rather tricky.
An optional field, the 802.1Q tag, goes here. If present, its first two bytes are 0x8100, a value that is invalid in other packet formats. This is called the Tag Protocol Identifier (TPID). If it's present, a two-byte field called the Tag Control Information (TCI) follows. The TCI carries the VLAN ID and the QOS priority, and will be described later.
Next comes the EtherType/length field, which is two bytes. For classic 802.3 framing a value of 1500 or less gives the length of the payload; values of 1536 (0x0600) or greater instead identify the protocol being carried, such as 0x0800 for IPv4.
Next comes the data, which can be between 46 and 1500 bytes.
Last comes a 4-byte field, the frame check sequence, which is the result of running an error detection algorithm called CRC-32 over the rest of the packet. The sender computes this value when sending the packet and appends it to the end. When the receiver performs the same calculation on the received packet, it's exceedingly unlikely to get a matching value unless the packet was transmitted correctly.
And that's all. If a packet is less than 64 bytes, which isn't allowed given the required data above, it's called a runt and discarded.
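To tie the fields together, here's a rough Python sketch that assembles an untagged frame in the layout just described. It's purely illustrative: the addresses and payload are placeholders, the preamble and start-of-frame delimiter are left out, and a real NIC computes and strips the checksum in hardware (the exact bit ordering of the FCS is glossed over here).

# Build an untagged Ethernet frame: destination MAC, source MAC, EtherType,
# payload padded to the 46-byte minimum, and a CRC-32 over the whole thing.
import struct
import zlib

def build_frame(dst_mac, src_mac, ethertype, payload):
    if len(payload) < 46:
        payload = payload.ljust(46, b'\x00')       # pad to minimum payload size
    header = dst_mac + src_mac + struct.pack('!H', ethertype)
    fcs = struct.pack('<I', zlib.crc32(header + payload))  # CRC-32, as in the FCS
    return header + payload + fcs                  # preamble/SFD not included

frame = build_frame(
    dst_mac=bytes.fromhex('ffffffffffff'),         # broadcast address
    src_mac=bytes.fromhex('0102030a0b0c'),         # the example address above
    ethertype=0x0800,                              # IPv4
    payload=b'hello, layer 2',
)
print(len(frame), frame.hex())                     # 64 bytes - the minimum frame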
Your network card
I'm writing this in February of 2009. Current technology is gigabit Ethernet, which is probably what your NIC supports and what you plug it into. Your network interface controller (NIC) lets your computer connect to the physical cable and communicate with the network. If you have a laptop it almost certainly has a network port. Network administration is a vast and variable field; some networks only allow connections from known systems, or have other restrictions. We're not going to cover those issues here. I'll assume the typical permissive network found in businesses or homes.
Your NIC plugs into a cable containing four twisted pairs of wires, and from there to a switch, possibly with a wall jack and premises wiring in between. Since premises wiring is just simple copper that extends the wires, we'll ignore it here.
Until your NIC has a physical connection to another device and they've worked out between them how to communicate, you're not "on the network".
Your NIC or your switch or both might only be capable of 100 million bits per second (100Mbps, or Fast Ethernet). You might be connected directly to another PC's network card, which is "technically" a network, but we won't discuss that odd case. Whether you need a normal cable, called a "patch" or "straight-through" cable, or a special cable that reverses the send and receive signals, called a "crossover" cable, depends on a number of factors. Most NICs and switches these days have a feature called Auto MDI-X that straightens out these issues. Switches and network cards can also discover between themselves which speeds each supports and automatically use the fastest common one. The only trap here is that the cable standards for modern networking are very strict: if both ends are capable of faster communication than the cable between them can reliably carry, they will suffer a horrible connection. If this happens to you, throw out the old cable and get a new one. They're cheap.
Almost all computers these days come with at least one gigabit Ethernet port, but they're not all the same. A high-end Ethernet controller is a microcomputer in itself and handles almost all aspects of the communication. Built-in controllers often use the main processor to calculate checksums and do various other things, and use system memory to hold packets during processing. Built-in controllers are getting better these days, though, and processors are powerful enough to handle this, so you don't have to worry about it too much unless your needs are pretty extreme - and then you wouldn't be reading this anyway.
Now look: gigabit isn't currently the top of the networking food chain. It's not even close. Unlike other IT infrastructure, networking usually progresses by factors of ten. The previous generation was 100 million bits per second. The current standard is 1 billion bits per second, or 1 gigabit. 10 gigabit Ethernet is now widely available, and 100 gigabit is in development. There are bizarre unrelated networking protocols like InfiniBand. You don't need to worry about any of that right now. Today gigabit Ethernet is where it's at, and it's more than enough for most of the stuff you want to do if you're my target audience.
The link
The link is shorthand for the successfully connected physical medium that data passes over.
The Hub, extender
These devices are historical oddities. If you find one, throw it away and replace it with a switch. If you don't know what these are, don't worry. You don't need to know about this. You don't want to try it.
The switch
Although some people are trying to get this named a "network bridge", its common name is "switch". This is the key piece of equipment we'll be talking about. Switches come in many varieties and capabilities and can cost more than half a million dollars on the high end or less than 50 dollars on the low. Some switches are also capable of performing "routing" at OSI layer 3, but we won't discuss that here - we'll only consider layer 2 switching, which all switches do. The switch receives the packet from your NIC. A switch learns which MAC address lives on which port by watching the source addresses of the frames it receives. If the destination MAC address is one the switch has learned, it forwards the packet out only the port leading to that NIC. If the destination address isn't known, the switch floods the packet out all of its ports except the one it was received on, or drops it, depending on the switch configuration. (Spanning Tree Protocol, which similarly equipped switches can run between themselves, isn't a forwarding mechanism - it disables redundant links so that this flooding can't loop forever.)
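Here's a toy model of that learn-and-flood behavior - illustrative only, not any vendor's implementation:

# A toy learning switch: remember which port each source MAC was seen on,
# send known unicast out one port, flood unknowns and broadcasts everywhere else.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = list(ports)
        self.mac_table = {}                        # MAC address -> port

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn where the sender lives
        if dst_mac in self.mac_table and dst_mac != 'ff:ff:ff:ff:ff:ff':
            return [self.mac_table[dst_mac]]       # known unicast: one port only
        return [p for p in self.ports if p != in_port]   # flood the rest

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle_frame(1, '0a:00:00:00:00:01', 'ff:ff:ff:ff:ff:ff'))  # [2, 3, 4]
print(sw.handle_frame(2, '0a:00:00:00:00:02', '0a:00:00:00:00:01'))  # [1]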
Managed switch
An unmanaged switch doesn't do QOS. It doesn't do VLANs. It probably doesn't do spanning tree. It doesn't have storable and recoverable configurations. Since managed switches start at under $200 for an 8 port gigabit switch these days, get a managed switch unless you know why you don't need one.
VLANs
Earlier we discussed the 802.1Q part of the packet header. In addition to the QOS priority, this field has 12 bits to designate the "virtual local area network", so when both ends of a link are capable of 802.1Q, and are configured to use it, roughly 4000 VLANs (4094 usable IDs) are possible. In practice not all switches support VLANs at all, and some only support a limited number. In most cases only servers access more than one VLAN on a single link.
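As a concrete illustration of how the priority and the VLAN number share that two-byte Tag Control Information field, here's a small Python sketch (the values are made up):

# Unpack an 802.1Q TCI: 3 bits of priority (the 802.1p value), one
# drop-eligible/CFI bit, and a 12-bit VLAN ID.
def parse_tci(tci):
    priority = (tci >> 13) & 0x7      # 0-7, the QOS "bin"
    dei      = (tci >> 12) & 0x1      # drop eligible / canonical format bit
    vlan_id  = tci & 0xFFF            # 0-4095; 0 and 4095 are reserved
    return priority, vlan_id, dei

print(parse_tci((2 << 13) | 90))      # (2, 90, 0): priority 2 on VLAN 90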
So what's a VLAN? Inasmuch as a LAN is a physical network, a "virtual LAN" is some subset of that physical network. By assigning numbers to VLANs it's possible to do a number of useful things. You can separate communication between servers and equipment based on role, and change those relationships in the switch software without rerouting the physical wires in the walls. This lets the network administrator assign the accounting department to its own network, for example, so that the sales department can't inadvertently access PCs in accounting. The administrator can also screw up this configuration - say, by leaving all VLANs and QOS enabled on a user's port by default - so that an attentive user can access every VLAN.
A port on a switch can be dedicated to a particular VLAN, and then all traffic received on that port from the end user will belong to that VLAN. If the person at that network port moves to another desk on another floor, it's possible to restrict his access only to the network resources that are appropriate for him. Inside the network the VLANs share physical links, but switches will not pass information from one VLAN to another. In order to get a packet from one VLAN to another, a router is required.
One trick with VLANs is that you can have two sets of switches that both carry, say, VLAN 11, separated by unmanaged switches, or by switches or ports configured not to pass VLAN 11 between them. In this case these two VLANs, though they share a VLAN number and physical connections, are isolated from each other. Spanning Tree Protocol can also wind up blocking the transfer of packets on a particular VLAN if configured incorrectly.
In addition, a LAN is a broadcast domain. Layer 2 networking contains a facility for sending one packet to all receivers on all ports on all switches on that network. Having too many users in a broadcast domain increases the likelihood one of them will go crazy and create a "broadcast storm". By segregating subsets of customers in VLANs, it's possible to limit the scope of such a malfunction.
QOS
QOS is about traffic priority. If you're doing VOIP or streaming video on your network and you require a connection that doesn't stutter then you probably need QOS.
One problem we get into here is that the QOS standard for networking, 802.1p, is implemented differently by the various networking equipment vendors. They've all got whiz-bang proprietary features to justify. After all, the standard has only been around for over a decade. It specifies 8 priority "bins"; how those bins are serviced is left to the implementation.
Most switching equipment vendors let you reserve a minimum percentage of a link for a particular bin. If there's no traffic in that bin, the bandwidth is available to other traffic; but when a stream does occur on the link, it's permitted to consume up to its reserved percentage without hindrance from other traffic on the line. When the communication passes through a link that doesn't support this, the tags are lost, so QOS delivery is limited to the segments of the network that directly support it.
How you would use it at home, for example: you have a switch that supports QOS, a video server with your home movies, and a MythTV box that you watch movies on. Naturally, if your spouse is downstairs remastering the video of the family Christmas event on your file server, you don't want that to degrade your viewing experience of Office Space. So you configure the video server to tag its traffic with QOS priority 2 on your video VLAN, VLAN 90. Then you tell the gigabit switch that the port to your MythTV box is on VLAN 90 and that the reservation for that bin is 20%. Magically your MythTV box has a minimum of 20% of its link reserved for video. This oversimplified example skips the part where you need at least two switches before this is useful.
Trunking
This is more of a business thing. There are two types of "trunking". The first is where you use one link to pass multiple VLANs. The second is where you use multiple individual links between two switches to increase the bandwidth between them. We're not going to worry about this right now.
Routers and other gateways
When traffic leaves the LAN it must pass through a gateway to an off-network device or network. For the purposes of this topic a router or gateway is just another computer. When we get to connecting VLANs together I'll cover this a little bit, but not a lot.
The main discussion.
Whew! That was a lot of background. I don't know about you, but I'm glad it's over. Let's do some network engineering now in another post.
Sunday, July 13, 2008
Linux GIS
For some time now I've been interested in Geographic Information Systems (GIS) on Linux. It's a natural combination, since the US government makes a huge amount of geographic data freely available. GIS systems take data points, usually geographic coordinates (longitude, latitude and elevation), associate them with other data (stream surveys, street plans, etc.), and produce a graphical representation that's very flexible. They're helpful for making maps or visualizing different kinds of information.
There is a project with a long history in Linux that does this -- it's called GRASS. It was chosen for three projects in the 2008 Google Summer of Code. It's an active project with many users, so it's likely to be around for quite a while longer, and it's licensed under the GNU GPL so price isn't an issue.
GRASS is pretty feature-rich. GIS systems are always complex beasts, as the various methods of storing, converting and visualizing geographic data are all rich fields with long histories and plenty of room for varying preferences. GRASS allows GIS data to be stored in common databases including MS Access, MySQL, PostgreSQL, MS SQL Server, Oracle, dBASE and others, as well as in various common formats or flat files. It can use live files created for and by ESRI's ArcGIS, the most common commercial GIS program.
With the next version of GRASS a native Windows build will be available. For now the Windows version of the application is built under Cygwin.
Like many GPL-licensed applications, GRASS is included in a number of packages called distributions, which bundle the Linux operating system and the usual applications with a complete suite of complementary tools aimed at an audience with a common purpose. ArcheOS is one example, targeted at archaeologists; it provides GRASS and related tools as well as a rich set of new toys to play with. I'll be using ArcheOS to set up a workstation system with GRASS. As of the current version (2.0.0), ArcheOS comes as a 1.2GB .iso file to burn to DVD, either for live-DVD use or to install, and it includes version 6.2.3 of GRASS (the most current stable release).
Anyway, give GRASS a try and tell me what you think.