symbolset - Words on a page<br /><br />(2012-08-27)<br />Thanks to <a href="http://movies.netflix.com/Movie/Resurrect-Dead-The-Mystery-of-the-Toynbee-Tiles/70170048">a documentary on Netflix</a> and some Googling, I now have the home address of a Toynbee tiler who may be the genesis of an enigma I've been following for 30 years.
<br />
I could go sit on his porch until he comes out and try to bond with him, but that seems a pushy, even cruel thing to do, even if we're both equally loony. I would if I must, but I don't see the need to make him suffer that.
<br />
It turns out he's a paranoid schizophrenic who is paranoid for good reason (see the movie), and he doesn't want to engage the public. He just wants his meme known. He's been remarkably creative and persistent about it, so I say let's give it to him. Let's validate his fantasy and explore the question he presents so desperately that he invented a whole new medium to engage us with: the tiles.
<br />
Let's have the public talk about Kubrick's 2001, and how it references Toynbee's theories about molecular regeneration, and maybe resurrection. Toynbee's work was original and seminal to many other great works. Toynbee was an overlooked visionary genius. The Toynbee tiler is too, in a different way: he has forced us against our will to examine his premise through sheer persistence and force of will.
<br />
This should end with his participation in the discussion, but let's lure him out rather than forcing him out. Give him the respect he deserves for intriguing us for 30 years, and he just might speak and give us some insight into why he has teased us so.<br />
<br />
<span style="font-weight: bold;">Who Da Punk (Mini-MSFT) surrenders</span> (2012-02-06)<br />The defeatist drumroll for Microsoft has overcome one of my favorite bloggers, "Who Da Punk" of <a href="http://minimsft.blogspot.com/">Mini-MSFT</a> fame.<br />
<br />
Purportedly a senior manager at Microsoft posting an anonymous blog these last eight years, "Mini" has been an ineffectual - but insightful - proponent of change. The blog has also been a useful avenue for Microsoft insiders (and fakers purporting to be insiders) to vent their anguish at the misdirection of the company, its processes and its HR issues. It has also become a focus for haters. The comments on prior posts are quite interesting and can give more insight into the Redmond giant's internal processes and history than it might like. They're still available.<br />
<br />
But new ones are no more. In his <a href="http://minimsft.blogspot.com/2012/01/microsoft-fy12q2-results.html">latest post</a> Mini makes it clear that he can bear the incessant depression no more. He's hanging up his insider geek hat and will not vet comments any more. After 8 years, he gives up. He may comment on the quarterlies, but he'll not allow comments until the blogging software (this very one!) allows user moderation.<br />
<br />
That's the stated reason anyway. My guess is that Microsoft has upgraded their web intelligence and he's fearful of being found out.<br /><br /><span style="font-weight: bold;">This blog was a bad idea</span> (2011-02-22)<br />I'm far too busy to post in this blog. I already comment on the issues of the day at Slashdot, so this blog was only for longer articles and personal memes that didn't happen to come up regularly there. Maintaining a pseudonymous blog while I'm also posting under my real name has become a burden. Soon I think I might pierce the veil and connect my symbolset handle with my own self. It's a dangerous thing - I've made some bold posts, predicted the future, insulted some folks. I may even have said some actionable things. But it might be interesting to see if the world can handle folk who speak their own mind.<br />
<br />
Even if it worked out OK I would miss the handle thing, and frankly the symbolset persona isn't my professional self. Perhaps I'm a bit schizophrenic. I would continue to operate it even after it was transparent.<br /><br /><span style="font-weight: bold;">Self reference post</span> (2010-09-30)<br />Google has a URL shortener. The short URL for this blog is <a href="http://goo.gl/NYGl">http://goo.gl/NYGl</a>.<br /><br /><span style="font-weight: bold;">HP Systems Insight Manager simplified</span> (2009-02-28)<br />What is HP server management software?<br /><br />Modern servers for large organizations come in racks. A rack can hold 5 extremely powerful and expandable servers, 42 of the thinnest servers, or up to 128 server blades. When these computers are set up, and while they're running, there's no keyboard, monitor or mouse connected to them. The physical installation and the software installation and configuration are handled by completely different people. An essential piece that makes this work is a built-in system manager.<br /><br />For HP servers the built-in system manager is called Integrated Lights-Out, or iLO. It's a dedicated computer inside the server that runs on standby power, so it's on even when the server is off. It's accessible through its own network port or a shared one. It has access to all of the server's health monitoring systems and to the keyboard, mouse, USB and video, and can even flash the BIOS. Older servers have basic iLO; more recent servers have iLO 2.
Both versions of iLO can be upgraded with a license key in the CMOS settings that enables more advanced features like the graphical remote console and virtual media.<br /><br />To make working with many thousands of systems manageable, you need a coordinated system that lets you monitor and perform operations on servers, groups of servers, or entire datacenters. That's where HP Systems Insight Manager comes in.<br /><br />Systems Insight Manager (SIM) consists of a web server, a database and a set of utilities. Although HP doesn't make a strong point of it, the machine this runs on is called the Central Management Server (CMS). The web server provides a single integrated viewpoint of all the servers, presenting a visual representation of the servers themselves. It can detect servers on the network, or you can tell it where they are. Once they're configured, servers appear in the interface and are monitored and managed continuously. You can actually look at the picture of the rack on the web server and see the lights blinking. It's integrated with the built-in management hardware of the servers, so by selecting a server you can perform many different operations. You can power the server on and off, turn on the blue service ID light, configure BIOS settings, flash the BIOS or even install an operating system. Using the remote console you can watch the machine boot as if you were in front of it with a keyboard, mouse and monitor, and use whatever graphical interface you install too, so configuring Windows or Linux doesn't require a trip to the server room. These features work outside the operating system, using dedicated hardware inside the server that runs on standby power, so they work even when the server is turned off. Because it's web based, it can be made available anywhere on your network or anywhere in the world.
Systems Insight Manager runs on its own server, and can be downloaded for free.<br /><br />HP offers some software packages for sale, and provides some with each ProLiant server. These packages help in running, managing and configuring an individual server. They all also plug into Systems Insight Manager, enabling various features from the higher-level view.<br /><br />The ProLiant Support Pack that comes with each server includes a suite of drivers and software for supported operating systems, including a service called the System Management Homepage. This service runs in the operating system, presents a web-based interface, and gives you access to all of the built-in system monitors and the system management hardware in each server.<br /><br />Systems Insight Manager detects this interface and installs a button linking to each system's System Management Homepage. Also included with the ProLiant Support Pack is a CD package called "SmartStart" that allows for the remote installation of an operating system on a single server, with a scripting toolkit for scripted installations. There's also an Array Configuration Utility (ACU) that lets you configure locally attached hard drives on HP RAID controllers. A diagnostic suite is included as well, which can run certain diagnostic tests outside the operating system (offline edition) or inside it (online edition).<br /><br />Insight Control Environment is available at additional cost.
It includes all of these modules, some of which are also available separately:<br /><br />Rapid Deployment Pack allows you to build and configure system images and stream them to individual servers or groups of servers.<br /><br />Virtual Machine Management Pack assists with virtual machines.<br /><br />Vulnerability and Patch Management lets you set up repositories for patches and deploy them automatically.<br /><br />Insight Power Manager allows you to monitor and control power usage per server and by groups of servers.<br /><br />In addition to HP servers, Systems Insight Manager can also monitor other devices that use SNMP, the (only nominally) Simple Network Management Protocol, which is supported by almost all modern network devices.<br /><br />Other vendors have similar systems for managing their servers. These power tools for server administration help reduce costs and enable fewer server administrators to manage far more servers.<br /><br /><span style="font-weight: bold;">Layer 2 networking - Simplified - First in series</span> (2009-02-11)<br />Layer two is the layer of the network that lies above the physical cables but below Internet Protocol and other session-based protocols. Understanding layer two helps with diagnosing problems that might occur between your PC and the router that takes your communications off of your local network and into the greater intranet or the Internet.<br /><br />In this first installment I'm going to cover some basic terms and describe the basic equipment. In the second installment we'll go over an example network and hopefully tie together how these things work.
If I get to a third installment I should be able to step up to the next level of our network model and tie some networks together.<br /><br />The real purpose of this article isn't to educate you - it's to cement these ideas in my mind in a way that is accessible to people I talk to on a daily basis. If you find this useful you're welcome to copy it in any way you like. I hereby dedicate it to the public domain.<br /><br /><span style="font-weight: bold;">Terminology<br />Technology<br />The addressing scheme<br />Reliably unreliable<br />The packet<br />Your network card<br />The Hub, extender and bridge<br />The switch<br />VLANs<br />QOS<br />Trunking<br />Routers and other gateways</span><br /><br /><span style="font-weight: bold;">Terminology</span><br />Layer 2 refers to the second layer of the 7-layer OSI networking model. Although there are other models that describe network architecture, the OSI model is the accepted standard for most people. Layer two is the level that defines a "network". Below this level are devices and media; above it are internets and intranets. This topic is a network. For completeness we'll also cover virtual networks and touch on routing between them, since these are issues dealt with at this level of the OSI model.<br /><br />IEEE 802.3 is the name of the working group that standardized Ethernet and documented the specifications still in use today, though most of the technology was first invented by Robert Metcalfe.<br /><br />IEEE 802.11 is the name of the working group that adopts standards for wireless Ethernet.<br /><br />There are other ways to do networking than Ethernet. They're all odd and/or dead, so I won't cover them here.<br /><br />An octet is 8 bits. The term byte technically refers to the word size of a particular information processing system, but let's not be pedantic.
For the purpose of this article a byte is an octet is 8 bits, and is represented by eight binary digits, two hexadecimal digits or a value from 0-255.<br /><br />Packets, datagrams and frames are not quite the same things. Despite this, the terms here will be used interchangeably to refer both to the information being passed (data) and the control information that describes it and how to get it where it's going (header). The purpose of this is to make the information more accessible. If you can't deal with this, please cite somebody else. Communication is not well served by excess precision.<br /><br /><span style="font-weight: bold;">Technology</span><br />We'll be discussing wired Ethernet over copper. For most of the material wireless networking is similar, but hopefully I'll find time to write it up in detail another time. For now the problem is big enough, so I'll stick with wired networks using Cat 5e or better media. Fiber is an important part of modern networking, but fiber at layer two is similar enough that I can probably avoid discussing the differences. There are other ways to do networking, but they're of historical interest or special-purpose use only.<br /><br />The network under discussion here will be a single Local Area Network, and the discussion will end at the first router we come to. Once a router re-addresses your data it's no longer on the same network and passes beyond this topic. The only exception is when we get to VLANs, for which a cursory discussion of routing is necessary, since VLANs are common parts of modern networks and appropriate to discuss at layer 2.<br /><span style="font-weight: bold;"><br />The addressing scheme</span><br />For layer two Ethernet we have a special sublayer, the Media Access Control or MAC layer, that deals with addressing. The rules are pretty simple. A MAC address uniquely identifies a particular access device that will receive packets.
A MAC address is typically 6 bytes, or 48 bits. MAC addresses are usually written as pairs of hexadecimal digits, called out in the order of transmission, such as 01-02-03-0a-0b-0c or 01:02:03:0a:0b:0c. In both of these cases the first half is the organizationally unique identifier (OUI) and the second half is the network interface controller (NIC) specific ID. The original purpose of this was to let network controller vendors identify their products in the MAC address and still leave a way for each NIC on a LAN to have a unique ID. Since this is 30 years later, you can probably anticipate that we've run short of numbers: individual vendors have multiple OUIs, and MAC addresses are no longer guaranteed unique. That's OK, though, because these days the MAC address is a configurable part of the NIC, so if you have two with the same number (an address collision) you can fix it.<br /><span style="font-weight: bold;"><br />Reliably unreliable</span><br />It's counter-intuitive, but it works. On the Ethernet at layer 2, the system is deliberately unreliable. There is no error correction mechanism. Ethernet delivers packets on a "best effort" basis. Unexpected packets received on a port are ignored. Packets to unknown hosts are simply discarded. At layer 4 we get systems that detect whether communication was successful, but the equipment at layer 2 literally doesn't care. Reliable methods have been tried, but they failed to keep up with the speed of Ethernet and were ultimately discarded or pushed into specialized applications.<br /><span style="font-weight: bold;"><br />The packet</span><br />The packet consists of a header and your data. If you have no interest in programming or network analysis you can safely skip the rest of this part. The header begins with a preamble that identifies the packet as an Ethernet frame: 7 bytes, each with the value 10101010.
This is the signal that lets the receiver know there's data coming down the wire. It's followed by the start-of-frame delimiter, a single byte with the value 10101011. Then comes the destination MAC address, then the source MAC address. The next field is rather tricky.<br /><br />An optional field, the 802.1Q tag, goes here. If present, its first two bytes are 0x8100, a value that is invalid in other packet formats. This is called the Tag Protocol Identifier (TPID). If it's present, an additional two-byte field called the Tag Control Information (TCI) follows. The TCI identifies the VLAN and the QOS priority, and will be described later.<br /><br />Next comes the Ethertype field, which is two bytes. For 802.3 ethernet, values up to 1500 give the length of the data; values of 0x0600 (1536) and above instead identify the protocol carried in the data.<br /><br />Next comes the data, which can be between 46 and 1500 bytes.<br /><br />Last comes a 4-byte field, the result of performing an error detection algorithm called CRC-32 on the rest of the packet. The sender computes this value when sending the packet and adds it to the end. When the receiver performs the same calculation on the received packet, it's exceedingly unlikely the computation will match the CRC field unless the packet was transmitted correctly.<br /><br />And that's all. A packet of less than 64 bytes, which isn't allowed given the required fields above, is called a runt and discarded.<br /><span style="font-weight: bold;"><br />Your network card</span><br />I'm writing this in February of 2009. Current technology is gigabit Ethernet, which is probably what you're plugging into. Your network interface controller (NIC) lets your computer connect to the physical cable and communicate with the network. If you have a laptop it almost certainly has a network port. Network administration is a vast and variable field.
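Backing up to the packet for a moment: if you like seeing things in code, the frame layout just described sketches out like this in Python. The preamble, start-of-frame delimiter and wire bit-ordering are omitted because NIC hardware handles those; the function names here are mine for illustration, not from any real networking library.

```python
import struct
import zlib

def build_frame(dst_mac: str, src_mac: str, ethertype: int, payload: bytes) -> bytes:
    """Build a minimal untagged Ethernet frame: destination MAC,
    source MAC, Ethertype, padded payload, CRC-32 check value."""
    def mac_bytes(mac: str) -> bytes:
        # Accept either 01-02-03-0a-0b-0c or 01:02:03:0a:0b:0c notation.
        return bytes(int(part, 16) for part in mac.replace("-", ":").split(":"))

    # Pad the payload up to the 46-byte minimum described above.
    payload = payload.ljust(46, b"\x00")
    header = mac_bytes(dst_mac) + mac_bytes(src_mac) + struct.pack("!H", ethertype)
    body = header + payload
    # zlib's crc32 uses the same polynomial as Ethernet's CRC-32.
    # (Stored here little-endian; real NICs handle the wire ordering.)
    fcs = struct.pack("<I", zlib.crc32(body))
    return body + fcs

# 0x0800 is the "IPv4" protocol value, i.e. above the 0x0600 length cutoff.
frame = build_frame("01:02:03:0a:0b:0c", "aa:bb:cc:dd:ee:ff", 0x0800, b"hello")
print(len(frame))  # 6 + 6 + 2 + 46 + 4 = 64 bytes
```

Note that the padded minimum works out to exactly 64 bytes, the runt threshold mentioned above.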
Some networks only allow connections from known systems, or have other restrictions. We are not going to cover those issues here; the typical permissive network found in businesses or homes is assumed.<br /><br />Your NIC plugs into a cable with four twisted pairs of wires, and from there to a switch, possibly with a wall jack and premises wiring in between. Since premises wiring is just simple copper that extends the wires, we'll ignore it here.<br /><br />Until your NIC has a physical connection to another device and they've worked out between them how to communicate, you're not "on the network".<br /><br />Your NIC or your switch or both might only be capable of 100 million bits per second (100Mbps, or fast Ethernet). You might be connected directly to another PC's network card, which is "technically" a network, but we won't discuss this odd case. Whether you need a normal cable, called a "patch" or "straight-through" cable, or a special cable that reverses the send and receive signals, called a "crossover" cable, depends on a number of factors. Most NICs and switches these days have a feature called "Auto MDI-X" that straightens out these issues. Switches and network cards can also discover between them which speeds each supports and automatically use the best one. The only trap here is that the cable standards for modern networking are very strict. If both the sender and receiver are capable of faster communication than the wire between them can carry, they will suffer a horrible connection. If this happens to you, throw out the old cable and get a new one. They're cheap.<br /><br />Almost all computers these days come with at least one gigabit Ethernet port, but they're not all the same. A high-end Ethernet controller is a microcomputer in itself and handles almost all aspects of the communication.
Built-in controllers often use the main processor to calculate checksums and for various other things, and system memory to hold packets during processing. Built-in controllers are getting better these days, though, and processors are powerful enough to handle this, so you don't have to worry about it much unless your needs are pretty extreme - and then you wouldn't be reading this anyway.<br /><br />Now look: gigabit isn't currently the top of the networking food chain. It's not even close. Unlike other IT infrastructure, networking usually progresses by factors of 10. The previous generation was 100 million bits per second. The current standard is 1 billion bits per second, or 1 gigabit. 10 gigabit Ethernet is now widely available, and 100 gigabit is in development. There are bizarre, unrelated networking protocols like InfiniBand. You don't need to worry about any of that right now. Today gigabit Ethernet is where it's at, and it's more than enough for most of the stuff you want to do if you're my target audience.<br /><br /><span style="font-weight: bold;">The link</span><br />The link is shorthand for the successfully connected physical medium that data passes over.<br /><br /><span style="font-weight: bold;">The Hub, extender and bridge</span><br />These devices are historical oddities. If you find one, throw it away and replace it with a switch. If you don't know what these are, don't worry: you don't need to know, and you don't want to try one.<br /><br /><span style="font-weight: bold;">The switch</span><br />Although some people are trying to get this named a "network bridge", its common name is "switch". This is the key piece of equipment we'll be talking about. Switches come in many varieties and capabilities and can cost more than half a million dollars on the high end or less than 50 dollars on the low. Some switches are capable of performing "routing" at OSI model layer 3, but we won't discuss that here - we'll only consider layer 2 switching, which all switches do.
The switch receives the packet from your NIC. A switch learns which MAC addresses are reachable through each of its ports by noting the source address of every packet it receives. If the destination address is in that table, the switch forwards the packet out only the matching port - even when the destination sits behind a neighboring switch, since addresses behind that switch are learned the same way. If the destination is unknown, the switch either forwards the packet out all of its ports except the one it was received on, or drops it, depending on the switch configuration.<br /><span style="font-weight: bold;"><br />Managed switch</span><br />An unmanaged switch doesn't do QOS. It doesn't do VLANs. It probably doesn't do spanning tree. It doesn't have storable and recoverable configurations. Since managed switches start at under $200 for an 8-port gigabit switch these days, get a managed switch unless you know why you don't need one.<br /><br /><span style="font-weight: bold;">VLANs</span><br />Earlier we discussed the 802.1Q part of the packet header. In addition to QOS, this field has 12 bits to designate the "virtual local area network". When both ends of a link are capable of 802.1Q, and are configured to use it, up to 4096 VLANs are possible (two values are reserved, so 4094 in practice). Not all switches can use them all, and some only support a limited number. In most cases only servers access more than one VLAN on a single link.<br /><br />So what's a VLAN? In as much as a LAN is a physical network, a "virtual LAN" is some subset of the physical network. By applying a number to the VLAN it's possible to do a number of useful things. You can separate communication between servers and equipment based on role, and change the relationships in switch software without rerouting the physical wires in the walls.
This allows the network administrator to assign the accounting department its own network, for example, so that the sales department can't inadvertently access PCs in accounting. It also allows them to screw up the configuration so that an attentive user can access all VLANs, by leaving all VLANs and QOS enabled on the user's port by default.<br /><br />A port on a switch can be dedicated to a particular VLAN, and then all traffic received on that port from the end user belongs to that VLAN. If the person at that network port moves to another desk on another floor, it's possible to restrict his access to only the network resources appropriate for him. Inside the network the VLANs share physical links, but switches will not pass information from one VLAN to another. To get a packet from one VLAN to another, a router is required.<br /><br />One trick with VLANs is that you can have two sets of switches that support, say, VLAN 11, with unmanaged switches, or switches or ports configured not to pass VLAN 11, between them. In this case the two VLANs, though they share a VLAN number and physical connections, are isolated from each other. Spanning Tree Protocol can also wind up blocking the transfer of packets on a particular VLAN if configured incorrectly.<br /><br />In addition, a LAN is a broadcast domain. Layer 2 networking contains a facility for sending one packet to all receivers on all ports on all switches on that network. Having too many users in a broadcast domain increases the likelihood that one of them will go crazy and create a "broadcast storm". By segregating subsets of users into VLANs, it's possible to limit the scope of such a malfunction.<br /><br /><span style="font-weight: bold;">QOS</span><br />QOS is about traffic priority.
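For the code-inclined, here's a sketch of how the VLAN number and the priority share the two-byte Tag Control Information field of the 802.1Q tag: 3 bits of priority, 1 drop-eligible bit, and the 12-bit VLAN ID that gives the 4096 possible values. The helper names are mine, for illustration only.

```python
import struct

def pack_vlan_tag(priority: int, vlan_id: int, dei: int = 0) -> bytes:
    """Pack a 4-byte 802.1Q tag: TPID 0x8100, then the TCI
    (3 priority bits | 1 drop-eligible bit | 12-bit VLAN ID)."""
    assert 0 <= priority < 8 and 0 <= vlan_id < 4096
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

def unpack_vlan_tag(tag: bytes):
    """Split a tag back into (priority, dei, vlan_id)."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == 0x8100, "not an 802.1Q tag"
    return tci >> 13, (tci >> 12) & 1, tci & 0x0FFF

tag = pack_vlan_tag(priority=2, vlan_id=90)  # e.g. priority bin 2 on VLAN 90
print(unpack_vlan_tag(tag))                  # (2, 0, 90)
```

Twelve bits is also why the VLAN count tops out where it does: 2^12 = 4096.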
If you're doing VOIP or streaming video on your network and you require a connection that doesn't stutter, then you probably need QOS.<br />One problem we run into here is that the QOS standard for networking, 802.1p, is implemented differently by the various networking equipment vendors. They've all got whiz-bang proprietary extensions to justify. After all, the standard is only 15 years old. It specifies 8 priority "bins"; how those are honored is left to the implementation.<br /><br />Most switching equipment vendors let you reserve a minimum percentage of a link for a particular bin. If no traffic is in that bin, the bandwidth is available for other traffic; but if a stream occurs on the link, it's permitted to consume up to that minimum percentage without hindrance from other traffic on the line. When the communication passes through a link that doesn't support this, the tags are lost, so QOS delivery is limited to the segments of the network that directly support it.<br /><br />How you would use it at home, for example: you have a switch that supports QOS, a video server with your home movies, and a MythTV box that you watch movies on. Naturally, if your spouse is downstairs remastering the video of the family Christmas event on your file server, you don't want that to degrade your viewing experience of Office Space. So you configure the video server with a QOS of 2 on your video VLAN, VLAN 90. Then you tell the gigabit switch that the port to your MythTV box is on VLAN 90 and that the QOS for that bin is 20%. Magically your MythTV box has a minimum of 20% of its link reserved for video. This oversimplified example skips the part where you need at least two switches before this is useful.<br /><br /><span style="font-weight: bold;">Trunking</span><br />This is more of a business thing. There are two types of "trunking". The first is where you use one link to pass multiple VLANs.
The second is where you use multiple individual links between two switches to increase the bandwidth between them. We're not going to worry about either right now.<br /><span style="font-weight: bold;"><br />Routers and other gateways</span><br />When traffic leaves the LAN it must pass through a gateway to an off-network device or network. For the purposes of this topic a router or gateway is just another computer. When we get to connecting VLANs together I'll cover this a little bit, but not a lot.<br /><br />The main discussion.<br />Whew! That was a lot of background. I don't know about you, but I'm glad it's over. Let's do some network engineering now in another post.<br /><br /><span style="font-weight: bold;">Linux GIS</span> (2008-07-13)<br />For some time now I've been interested in Geographic Information Systems (GIS) for Linux. This is a natural combination, since there is a huge amount of geographic data available for free from the US government. GIS systems take datapoints, usually as geographic coordinates (longitude, latitude and elevation), and by associating various data (stream surveys, street plans, etc.) give a graphical representation that's very flexible. They're helpful for making maps or visualizing different elements.<br /><br />There is a project with a long history in Linux that does this -- it's called <a href="http://grass.itc.it/">GRASS</a>. It was chosen for three projects in the <a href="http://code.google.com/soc/osgeo/about.html">2008 Google Summer of Code</a>. It's an active project with a long history and many users, so it's likely to be around for quite a while longer, and it's licensed under the <a href="http://www.gnu.org/copyleft/gpl.html">GNU GPL</a> so price isn't an issue.<br /><br />GRASS is pretty feature rich.
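As a taste of the arithmetic GIS packages grind through constantly, here's a quick sketch of the great-circle distance between two of those coordinate datapoints. It assumes a spherical Earth, which real GIS tools improve on with proper ellipsoid datums; the function name and example coordinates are mine.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (latitude, longitude)
    points in degrees, returned in kilometers. Assumes a spherical
    Earth of radius 6371 km -- fine for rough work only."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Roughly Seattle to Portland
print(round(haversine_km(47.61, -122.33, 45.52, -122.68)), "km")
```

A full GIS does this kind of projection and distance work across millions of points, which is why the storage and visualization machinery matters so much.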
GIS systems are always complex beasts, as the various methods of storing, converting and visualizing geographic data are all rich fields with long histories and plenty of room for varying preferences. GRASS allows GIS data to be stored in any of the common databases, including MS Access, MySQL, PostgreSQL, MS SQL Server, Oracle, dBASE and others, as well as various common formats or flat files. It can use files created for and by <a href="http://www.esri.com/">ESRI</a>'s <a href="http://www.esri.com/software/arcgis/arcgisserver/index.html">ArcGIS</a>, which is the most common commercial GIS program.<br /><br />With the next version of GRASS a native Windows build will be available. For now the Windows version of the application is built under <a href="http://www.cygwin.com/">Cygwin</a>.<br /><br />Like many GPL-licensed applications, GRASS has been included in a number of packages called <a href="http://distrowatch.com/table.php?distribution=archeos">distributions</a> that bundle complementary applications suiting a common purpose, along with the Linux operating system and all of the usual applications as well. <a href="http://www.arc-team.com/archeos/wiki/doku.php">ArcheOS</a> is an example targeted at archeologists that provides GRASS and related tools, along with a rich set of new toys to play with. I'll be using ArcheOS to set up a workstation system with GRASS. As of the current version (2.0.0), ArcheOS comes as a 1.2GB .iso file to burn to DVD for live use or installation, and includes version 6.2.3 (the most current stable release) of GRASS.<br /><br />Anyway, give GRASS a try and tell me what you think.<br /><br /><span style="font-weight: bold;">LTSP configuration (Gutsy) - Episode 2</span> (2008-07-04)<br />LTSP is the Linux Terminal Server Project.
Because it's popular with schools it's had quite a bit of development, and it has been adopted by Ubuntu as part of their Edubuntu package. It's generally used to let a server provide the horsepower for a bunch of thin clients. We'll be expanding it to serve other useful purposes.<br /><br />We're going to use it to help clone a bunch of Windows XP computers.<br /><br />In our first episode we built an LTSP server. If you haven't read that article yet, or you don't have a good LTSP server (not in production!) to work with, it would be good to go back and follow the steps so you are better able to follow along.<br /><br />There are several steps to perform here. We have to select an imaging platform that's bootable with LTSP. It has to copy not the whole drive but just the blocks that have data in them. It has to be reasonably fast. We have to select a method of getting the large image to the clients -- probably file sharing, but possibly multicasting. Then we have to tie it all together. One advantage we have is a large number of surplus machines from a prior installation lying around, with 40GB drives and gigabit Ethernet, to use as servers.<br /><br />To get the image size down we need a tool built on ntfsclone, since that's the project that understands the contents of the NTFS format and can copy only the blocks that have data. It also has to work with LTSP and allow us some flexibility in how we use it. I chose <a href="http://www.clonezilla.org/">clonezilla</a>. This project is a subproject of Diskless Remote Boot in Linux <a href="http://drbl.sourceforge.net/">(DRBL)</a>, which is a project similar to LTSP. DRBL and clonezilla are projects of Taiwan's <a href="http://www.nchc.org.tw/">National Center for High-Performance Computing</a>. Clonezilla has handy installers, comes in bootable CD and pendrive formats, and a version is available for network booting.
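The reason ntfsclone images stay small is that it reads the filesystem's allocation bitmap and copies only the blocks actually in use. A toy sketch of the idea (everything here is made up for illustration; it is not ntfsclone's actual code):

```python
def used_block_copy(disk, used, block_size=4):
    """Copy only the blocks flagged as in-use in a filesystem's
    allocation bitmap, returning (block_number, data) pairs --
    the trick that keeps ntfsclone-style images small."""
    image = []
    for n, in_use in enumerate(used):
        if in_use:
            image.append((n, disk[n * block_size:(n + 1) * block_size]))
    return image

# A toy 6-block "disk" where only blocks 0, 2 and 5 hold real data.
disk = b"AAAAxxxxBBBBxxxxxxxxCCCC"
image = used_block_copy(disk, [True, False, True, False, False, True])
print(len(image), "blocks saved instead of", len(disk) // 4)
```

A dumb sector copier like dd would store all six blocks; a filesystem-aware one stores three. That's the difference between shipping the whole 40GB drive and shipping the few GB that matter.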
Although they claim to support multicasting, the process is as yet unwieldy, so we'll use the ltsp server as a DHCP and file server and clone additional servers to meet our bandwidth requirements. Since we're using Ubuntu for the ltsp server, I decided to go with <a href="http://www.clonezilla.org/download/sourceforge/">Clonezilla Live Experimental (Hardy)</a>. Download the .iso and burn it to a CD. <br /><br />Before we put a lot of work into making it netbootable, I should validate that it makes a good copy in a reasonable period of time. I'll be using a recently imaged old laptop that won't be a disaster if I mangle its image, just in case clonezilla does not work as advertised. My actual target clients are dual core notebooks with faster hard drives, but their image is also five times the size. I'm looking for scalability on the server (serving many clients simultaneously) and on the network.<br /><br />Boot the clonezilla CD from a client you would like to clone that's connected via network to your ltsp localnet.<br /><br />pic<br /><br />Choose the boot-to-RAM option, because we'll run from RAM when we PXE boot. After some text scrolls past you see this:<br /><br />pic<br /><br />We'll choose the English version and leave the keymap untouched. <br /><br />Start clonezilla.<br /><br />device image disk/partition to/from image<br /><br />We'll use the ssh server, because we don't have samba or nfs set up yet on our ltsp server.<br /><br />It will automatically detect our NIC and network.<br /><br />DHCP is set up, so we'll use that to get our address.<br /><br />It detects our server and offers it as the default.<br /><br />Port 22 is the default for ssh.<br /><br />The default account is root. We don't allow remote access from a root account, so use a regular user account here instead.<br /><br />Here we select a directory on the host. 
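On the ltsp server, preparing that directory might look like the sketch below. The `/home/partimag` path matches what appears later in this walkthrough; the `imager` account name is purely an assumption for illustration, so substitute the non-root account you told Clonezilla to log in as.

```shell
# Run as root (or prefix each command with sudo).
# IMAGE_USER is a placeholder account name -- use your own.
IMAGE_DIR=/home/partimag
IMAGE_USER=imager
mkdir -p "$IMAGE_DIR"
chown "$IMAGE_USER": "$IMAGE_DIR"
ls -ld "$IMAGE_DIR"   # verify ownership before starting the save
```

If the directory isn't writable by the SSH account, the save step fails partway through, so it's worth the ten seconds to check.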
This is a good time to make the directory and ensure it is owned by the user you selected before.<br /><br />We're warned that we're about to be asked for a password.<br /><br />Are we sure we want to connect to a new server? Of course the answer is yes here.<br /><br />Here's a prompt with no useful information for what we're doing. Press enter.<br /><br />We're going to choose savedisk here to take a snapshot of the hard drive in this computer. When we restore, we choose restoredisk instead.<br /><br />We're going to use ntfsclone, so choose the first option here.<br /><br />The only default here is -c, wait for confirmation. We're going to clear that and have no options set on this screen.<br /><br />The hard drive on this PC holds 5.4 GB. Using -z1 we can bring that down to 2.1 GB, which is better for our networking. -z2 is much slower for little improvement in image size, so on balance it's probably a loss.<br /><br />Here we choose an image name. This will actually be a subdirectory in the folder chosen previously, with various files in it.<br /><br />There's only one drive in this machine. It's a laptop. As soon as we confirm this last entry, it will begin taking the image and storing it on the server. The first run with 10/100 networking took 445 seconds. A second copy to the server took 442 seconds. Download with 10/100 took 382 seconds, and at gigabit speed we get 365 seconds. Obviously bandwidth isn't our bottleneck. One thing to watch out for: on the server, while storing one image, both CPUs hit about 50%, considerably more than their baseline 20%. This is likely due to the encryption overhead of SSH connections. The network usage goes in spikes of about 4-12 MB/s with gigabit networking. To improve this we'll need to use a different network protocol to serve the images.<br /><br />Now we check the image size on the server.<br /><br />ltsp:/$ ls -hal /home/partimag/2008-07-05-00-img/<br />total 2.1G<br /><br />That's good. 
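As a sanity check on those timings, the average throughput works out like this (rough arithmetic only, using the ~2.1 GB image and the 445-second save from above):

```shell
# Back-of-the-envelope average rate for the SSH save:
# ~2.1 GB (~2150 MB) transferred in 445 seconds.
image_mb=2150
save_secs=445
echo "$((image_mb / save_secs)) MB/s average"   # prints: 4 MB/s average
```

That ~4 MB/s average is consistent with the 4-12 MB/s spikes observed and nowhere near gigabit line rate, which points the finger at SSH encryption overhead rather than the network.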
Now we repeat the process, but choose restoredisk to download the image back to the client.<br /><br />Test the image thoroughly. Are all the files there? Perform a chkdsk. No errors? Then we've got a viable copy, but the speed needs work.<br /><br />I'll try Samba next. We'll stick with the gigabit connection since it's up. I would go over the way I configured Samba, but you can figure it out from <a href="http://www.debuntu.org/guest-file-sharing-with-samba">this useful page</a>.<br /><br />Testing with samba reveals that the server processor overhead for a single gigabit connection goes from the baseline of 20% to about 22%. We've removed the processor bottleneck. We're only using about 1/10th of a gigabit link on average. We have plenty of machines available, so we'll probably go with six or eight clients per server depending on how much the load slows them down.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-556922039628219922008-07-04T11:20:00.000-07:002009-02-28T12:49:12.255-08:00LTSP configuration (Gutsy) - Episode 1The Linux Terminal Server Project (LTSP) is a method of using linux as an operating system that delivers the performance of a server to thin clients. It works with many linux distributions and I have previously used it with good results. I'm working on putting together a system that lets me use the ltsp architecture to also perform imaging of desktops and laptops, in bulk and quickly, using a complete FOSS toolchain. If I get that far I'll explore using LTSP's on-demand architecture as part of a cloud-type redundant infrastructure.<br /><br />I've gotten LTSP systems up and running before. This latest evolution is giving me grief. The purpose of this post is to document the successful steps so that I can replicate them reliably. Version 5 of LTSP is pretty slick once you get it going.<br /><br />For a server platform I have an HP XW8200 with 4GB RAM and dual 3.2GHz Xeon processors. 
It has a 72GB U320 SCSI drive to boot from and an additional 500GB SATA drive for data. It has three gigabit network ports - one on the motherboard and two on a server grade add-in PCI-X card. I will be using one of these to connect to the upstream internet, and two for my localnets. Each localnet gigabit NIC will be connected to a different switched network. The clients will boot from the network and be offered a menu of LTSP client or imaging at boot time.<br /><br />I've selected the <a href="https://help.ubuntu.com/community/UbuntuLTSP/LTSPQuickInstall">Ubuntu 8.04 (Hardy Heron) Alternate CD mode LTSP installation</a>. It has a text-based installer that adds all of the basic stuff required to get the server up and running. It is supposed to work right out of the box, though that's not my experience.<br /><br />The first issue I've discovered is that this method will not properly install if the PC is connected to the internet during installation, but it also fails if no network port has link. The networking is universally misconfigured in these cases. The workaround is to unplug the NICs and plug the one NIC that will be used for the Internet into a standalone network switch. This allows the NIC to be connected and configured as the primary network interface. I've selected eth1 for this chore. After the server is up and running you can configure the network the rest of the way.<br /><br />The second issue is that if I run the install with the SATA drive connected, the system tries to boot from it even though I have the BIOS set to prefer the SCSI drive. I fix this by disconnecting the SATA drive until later in the installation.<br /><br />The third issue is that at work my tyrannical network admins detect linux package updates as abusive network consumption and throttle me to less than dialup bandwidth. 
To get around this I'll be doing the work at home, where I have 6Mbps cable broadband I can abuse all I like.<br /><br />The next step is to configure the network. First, connect the port that you were keeping alive to the network, boot into your new system and log in. At that point you should be able to use the Internet. Then configure the other two network ports. You'll need to know your network gateway, which is shown on the last line of output from the "route" command. For my purposes here it's the home router I'm using - 192.168.0.1. You will need a network address and mask for each of your localnets. I'm choosing 192.168.10.1 255.255.255.0 for eth0 and 192.168.11.1 255.255.255.0 for eth2. One pitfall here is trying to configure these ports on the same subnet. Don't do it. It messes up your routing and your server won't know where to send the packets. If the ltsp server gets its internet from dhcp, you also want to make sure neither of these subnets is the same as a subnet you might be assigned automatically. Now we have the server up and running online. It's time to get updates.<br /><br />In the menu choose System->Administration->Synaptic Package Manager and click the Reload button. The list of software sources is pre-loaded for you. Reload downloads the current list of updates and checks them against your current install. Today, against the basic installation I did, there are 228 updates, of which 9 are new packages and 219 are upgrades to existing packages. It's 256 MB in all. I'm waiting for them to download and install right now. There are kernel updates in there, so there will be a reboot afterward. Today there are over 24,000 software packages in the software repository, and more than 1400 of them are installed in this basic configuration.<br /><br />I get a note that my ssh keys were updated. This will require rebuilding the thin client image that was built during the install. 
It tells me the key was stored in:<br /><br />/etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_dsa_key<br /><br />We fix this by running<br /><br />sudo ltsp-update-sshkeys<br /><br />Once the updating is done and I have a current server, there's still another step before I can boot the clients. During installation it warned me that DHCPD needed to be configured because it couldn't figure out what networks the clients were on.<br /><br />The log for dhcpd is /var/log/syslog. Restart dhcpd with<br /><br />sudo invoke-rc.d dhcp3-server restart<br /><br />The next issue is that the ltsp server for some reason stores the dhcpd configuration file in /etc/ltsp rather than the default /etc/dhcp3 folder. I update the dhcpd.conf file in /etc/ltsp with this:<br /><blockquote><br />#<br /># Default LTSP dhcpd.conf config file.<br />#<br /><br />authoritative;<br /><br />subnet 192.168.10.0 netmask 255.255.255.0 {<br /> range 192.168.10.20 192.168.10.250;<br /> option domain-name "example1.com";<br /> option domain-name-servers 192.168.10.1;<br /> option broadcast-address 192.168.10.255;<br /> option routers 192.168.10.1;<br /># next-server 192.168.0.254;<br /># get-lease-hostnames true;<br /> option subnet-mask 255.255.255.0;<br /> option root-path "/opt/ltsp/i386";<br /> if substring( option vendor-class-identifier, 0, 9 ) = "PXEClient" {<br /> filename "/ltsp/i386/pxelinux.0";<br /> } else {<br /> filename "/ltsp/i386/nbi.img";<br /> }<br />}<br />subnet 192.168.11.0 netmask 255.255.255.0 {<br /> range 192.168.11.20 192.168.11.250;<br /> option domain-name "example2.com";<br /> option domain-name-servers 192.168.11.1;<br /> option broadcast-address 192.168.11.255;<br /> option routers 192.168.11.1;<br /># next-server 192.168.0.254;<br /># get-lease-hostnames true;<br /> option subnet-mask 255.255.255.0;<br /> option root-path "/opt/ltsp/i386";<br /> if substring( option vendor-class-identifier, 0, 9 ) = "PXEClient" {<br /> filename "/ltsp/i386/pxelinux.0";<br /> } else {<br /> 
filename "/ltsp/i386/nbi.img";<br /> }<br />}</blockquote><br /><br />Then I PXE boot a client directly attached to eth0. It gets a DHCP address of 192.168.10.250 and loads the boot image with Busybox. It shows the Ubuntu splash screen but then fails out to an initramfs shell. This generally indicates that the client image that was installed from the cdrom is bad. To fix this I move the directory /opt/ltsp/i386 to /opt/ltsp/i386.original and run<br /><br />sudo ltsp-build-client<br /><br />This directory is very important. It's a "chroot" environment. We will be working with different chroot environments when we build client images, but I'm going to get the ltsp client image built and booting properly first to validate the architecture. ltsp-build-client takes a good long time to download the component parts from the repository and build the client image.<br /><br />We're not done yet. Now we update the repository sources for the client:<br /><br />sudo mv /opt/ltsp/i386/etc/apt/sources.list /opt/ltsp/i386/etc/apt/sources.list.backup<br />sudo cp /etc/apt/sources.list /opt/ltsp/i386/etc/apt<br /><br />And chroot into the client environment<br /><br />sudo chroot /opt/ltsp/i386<br /><br />Inside the chroot we're already root, so update the packages and upgrade them with<br /><br />apt-get update<br />apt-get upgrade<br /><br />Today there are 43 packages to upgrade. Then I exit the chroot environment<br />exit<br /><br />and update the client image with<br />sudo ltsp-update-image<br /><br />When this is complete I can PXE boot the client, log in and it works fine. I have a working LTSP system. The clients boot in about 15 seconds and are ready to go immediately.<br /><br />Now is a great time to make a backup copy of your /opt/ltsp/i386 folder. If you mangle it, then you will be able to put it back.<br /><br />Next I install thin-client-manager-gnome using System->Administration->Synaptic Package Manager. This lets me see the processes on the client. 
I'm supposed to be able to kill them and get a remote desktop too, but that's not working out. I added it to my main menu with <br />/usr/bin/gksudo /usr/bin/student-control-panel<br />The icons are in /usr/share/student-control-panel/ but they're PNGs, so if your menu editor wants another format you'll have to use something else.<br /><br />One quick test - shut down the client and the server. Boot the server. After it's up, boot two clients, one on each subnet port. If they both come up fine and working, you have successfully built ltsp. That's it for this step.<br /><br />For the next article I'll be building the boot menu so that instead of booting to LTSP you'll have a few seconds to choose a different option, such as cloning.<br /><br />The third article will cover building the cloning image.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-88777792436049680002008-06-19T23:46:00.000-07:002009-02-28T12:46:39.021-08:00Ubuntu + LTSP + DrQueue = Render cluster<p>The latest version of Ubuntu incorporates the venerable LTSP project in an interesting way -- any chroot environment can be configured as an environment to be PXEBooted. Since PXEBoot has been built into every consumer machine for five years, many new things are possible.<br /><p>LTSP is designed to be a way that ancient desktops and modern thin clients can be configured to save money on the point-of-access. This new facility means that much more can be done with it. Understanding how requires a bit of explaining.<br /><p>A chroot environment is a configuration in Linux where the user can (CH)ange the (ROOT) directory to some subdirectory of the current computer. It's used in services to isolate a particular service or user's environment so that they can't access things they're not supposed to. It's like a limited virtual machine. 
It can be configured the same way a normal environment would be -- with local applications, events, all the usual stuff.<br /><p>LTSP extends this by building the chroot environment into an image file that a booting machine can use as its own <i>real</i> environment. By controlling the chroot environments issued to various machines based on MAC address (an address unique to the machine or network card) one can assign a specific chroot environment to a particular machine. This allows LTSP to issue a thin-client linux to ancient computers that delivers a modern experience using the server's greater computation power. It also allows the system to send special environments based on the client's architecture. PowerPC Macintosh computers require a special one, as do some others. You can even PXE boot a virtual machine -- so as to leverage virtualization technologies and server consolidation dynamically. A controller process can be configured to monitor loads on your network and dynamically launch virtual machines to handle them as the need requires.<br /><p>It has been possible for some time to build a redundant architecture for every common service that uses various network and software methods to assign work for one service to multiple servers. By leveraging this PXE boot, specific environments for specific services, and assigning machines to service tasks via MAC addresses it's possible to create a redundant architecture to provide all of these services that scales to <i>any</i> size.<br /><p>This changes a great deal in infrastructure design. Requests can round-robin to whatever server is available. When services are slow: add another server to the list that receives the image for that service and boot it. It will automagically configure itself to receive a share of the load and serve clients. Need more power in your render cluster? Buy as many render nodes as you need and PXE Boot them -- no-touch configuration. A node fails? That's fine. 
It's all redundant. Swap it out and move on. Even the LTSP servers <i>themselves</i> can be made redundant in this way, so that as long as one persists the architecture will survive.<br /><p>What I think is cool about this: You can build the most powerful render cluster in the <i>world</i> without writing even <i>a single line of code</i>. That's right - the programmer-free cluster. It's all off the shelf hardware and software.<br /><p>Over the next few weeks I'll be building a render cluster using cheap equipment. Watch this space to see what I can do with it.Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-33506688.post-15406540111644167872008-03-29T10:14:00.000-07:002008-03-29T11:24:02.829-07:00Networking education resources<p>If you want to get a good basic understanding of how basic networking works, you could do worse than to take the <a href="http://www.hp.com/rnd/training/technical/primer.htm">ProCurve Networking Primer</a>. It offers the fundamentals in an easy to understand self-paced course. It doesn't have a lot of vendor bias in it.<br /><p>HP offers a great deal of training in fundamentals for free. Some of it is specific to their products and some of it is not. On the <a href="http://www.hp.com/rnd/training/tech_training.htm#jumptocontent">ProCurve Training</a> page you will find some materials to study if you are interested in these things. It's accessible to the public.<br /><p>A lot of the HP training is available free to the public but it's hidden behind <a href="http://education.itresourcecenter.hp.com/TrainerII/en_US/index.jsp">a membership page</a> so people can't find it easily. This is silly because the training itself is hosted on a public FTP server. For example the exam preparation guides are <a href="ftp://ftp.hp.com/pub/hpcp/epgs/">in the epgs directory</a>. 
There's quite a lot of interesting stuff on <a href="ftp://ftp.hp.com/pub/">ftp.hp.com</a> and it's wide open for browsing.<br /><p>IBM also has a good deal of online training available <a href="http://www.ibm.com/products/finder/us/finders?pg=trfinder">here</a>.<br /><p>Naturally MIT's Open CourseWare <a href="http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-829Computer-NetworksFall2002/CourseHome/index.htm">covers networking as well</a>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-3232416478242017632008-03-28T23:05:00.000-07:002008-03-29T10:08:49.743-07:00Nettop, netbook, Mobile Internet Device, Blah<p>Yeah, there's lots of vapor in the air regarding "thin is in" low-power, modest-performance mini notebooks and portable PC components. It would be easy to grouse about how we've heard this before, and when the air cleared the thing cost $2500 if you could buy it at all, and was lame until you dropped it, at which point it was worthless.<br /><p>The thing is, that story's over. Flash storage as a medium has matured and become much cheaper. You can get small LCD (or newer tech) monitors at ridiculously low prices because of economies of scale. The small LCD in the eee PC for example is used in point of sale equipment, digital photo frames, kiosks, and a number of other devices. With a low power processor that's also cheap, the Bill of Materials on this equipment starts getting interesting.<br /><p>At IDF in a few days the NDAs for lots of companies building platforms on Intel's Diamondville and Silverthorne (nee Atom) processors expire, and we're going to see what kind of device the major manufacturers can build with a 0.5W - 2.5W processor that is very cheap, runs IA32 architecture and clocks at reasonable (1.8 GHz?) speeds. 
I think there will be more than a few surprises in store.<br /><p>I'm going to speculate that more than a few will be decent laptop computers that cost about what consumers are currently paying for an MP3 player like the Zune or the iPod. That's going to drive a lot of the market in the third world. It's going to change a lot about the bottom end of the laptop market. Some of these things are not going to be computers at all, but they will also be really cool.<br /><p>One thing's for sure, though: if any of them run Vista, they won't do it well.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-53191316123251662452008-03-08T15:51:00.000-08:002008-03-29T10:10:58.176-07:00Will Intel's Atom be a smash?<p>The <a href="http://www.eweek.com/c/a/Desktops-and-Notebooks/Intel-Drops-an-Atom-Brand/">buzz has begun</a> on Intel's <a href="http://www.intel.com/technology/atom/index.htm">Atom</a> processor. Formerly known as <a href="http://en.wikipedia.org/wiki/Silverthorne_(CPU)">Silverthorne</a> and <a href="http://arstechnica.com/news.ars/post/20071203-intels-low-cost-diamondville-cpu-to-power-olpceee-pc-mobile-category.html">Diamondville</a>, this disruptive technology is set to sweep the world by summer.
<br /><p>What is it? Atom is a processor. It consumes between 0.6 and 2.5 watts running full out, depending on the clock speed, and as little as 0.03 watts in sleep mode. It is tiny -- 25 square millimeters, or roughly 3 millimeters by 9. It's x86 compatible, as it's derived from Intel's Core architecture. Clock speeds for it are currently estimated at 1.8GHz at the top end, 500MHz at the bottom. The technology is capable of either hyperthreading or dual core.
<br /><p>What's the big deal? This is huge. Look at the <a href="http://support.microsoft.com/kb/314865">requirements for Windows XP</a>: a 300MHz Pentium-class processor with 128MB of RAM. This thing easily clears even the recommended requirements at the minimum 0.6 watts power level. With an <a href="http://arstechnica.com/news.ars/post/20070312-intel-gets-into-the-flash-hard-drive-game.html">Intel</a> <a href="http://www.intel.com/design/flash/nand/z-p140/">Solid State</a> hard drive and 1GB of RAM this thing is a whole PC that fits inside a tin of Altoids and runs on AA batteries or can be embedded inside a 22" monitor for about the cost of the cardboard box it comes in. In silicon small = inexpensive and this is <i>tiny</i>. This moves a real PC into the realm of affordability for a huge segment of the world's population that was previously not served.
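Just for fun, the "runs on AA batteries" claim above roughly checks out. Here's a back-of-the-envelope sketch; the ~2500 mAh alkaline cell capacity is my own assumption, and this covers the CPU alone at its maximum 2.5 W, not a whole system:

```shell
# Two alkaline AA cells, assumed ~2500 mAh at 1.5 V each.
cell_mwh=$((2500 * 15 / 10))    # ~3750 mWh per cell
total_mwh=$((2 * cell_mwh))     # ~7500 mWh for two cells
cpu_mw=2500                      # Atom at full tilt: ~2.5 W
echo "$((total_mwh / cpu_mw)) hours at full load (CPU only)"
```

Run the processor nearer its 0.6 W floor and that stretches past twelve hours, which is why the Altoids-tin PC isn't entirely a joke.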
<br /><p><a href="http://www.xbitlabs.com/news/mainboards/display/20080306120609_Intel_Prepares_Its_Own_Mini_ITX_Platform.html">Mini-ITX</a> is a popular platform, and VIA gets up to $300 for their 1.5GHz platforms in this form factor. <a href="http://www.xbitlabs.com/news/mainboards/display/20080306120609_Intel_Prepares_Its_Own_Mini_ITX_Platform.html">"This complete platform is expected to be priced at no more than $50-60 in retail."</a> Wow. Just wow. The implications for car PC and embedded media player applications are enormous.
<br /><p>What else? There's a Centrino Atom chipset aimed at the Eee, OLPC and Classmate class of cheap notebooks with wireless, fair video, and all the usual goodies that stays at low power. Over 50 subnotebooks that Intel is now classing as "Mobile Internet Devices" or MIDs are launching right away. Phone applications are obvious. Perhaps less obvious are the implications for home routers, thin clients, toys, home robotics, gumstix, Network Attached Storage, Wireless mesh networks, military applications, POE webcams, supercomputer applications and workstations.
<br /><p>The downside: Although some vendors will claim the "<a href="http://blog.seattlepi.nwsource.com/microsoft/archives/133778.asp">Vista Capable</a>" label, we all know what that means. It means that the PC is incapable of giving a <a href="http://blogs.computerworld.com/node/5375">good experience</a> when loaded with Vista. A version of Ubuntu is available for it already, though, that runs Open Office just fine so you should be able to open those PowerPoint presentations in <a href="http://www.linux.com/feature/40736">Impress</a> without any trouble on your Mobile Internet Device.
<br /><p>The interesting question really is "what would you do with it?" Really. Pretend for a moment you're a platform engineer and tell me what you would do with this thing.
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-85947133318782590242007-12-30T16:47:00.000-08:002008-11-12T20:29:36.175-08:00Zscanner 800<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9_9-TgAls2k8s-aHIT_XSK7GvY3kzjePZLWZ3HZ1DMfYyXvtEBhyNmNS_8b32JY4klE4jQ45cfRkzn8gFonJyjQ1r-VeEHECJG-Ivf3B2lXD89_EPmkCxMIvdOAxk0u55_N2Akw/s1600-h/zscanner800-0141.PNG"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9_9-TgAls2k8s-aHIT_XSK7GvY3kzjePZLWZ3HZ1DMfYyXvtEBhyNmNS_8b32JY4klE4jQ45cfRkzn8gFonJyjQ1r-VeEHECJG-Ivf3B2lXD89_EPmkCxMIvdOAxk0u55_N2Akw/s320/zscanner800-0141.PNG" border="0" alt=""id="BLOGGER_PHOTO_ID_5149934349126270402" /></a><br /><p><a href="http://www.deskeng.com/articles/aaafgc.htm">The Zscanner 800</a> looks like a nice gift for the budding 3d developer. It's a handheld trinocular camera with laser LED projector that interpolates points in 3d, in real time.<br /><p>Just the right size for scanning the faces of your whole family so you can update their secondlife avatars. Output is .stl files.<br /><p>At $50k for this version getting models from the real world into the virtual world is getting cheaper all the time. The intellectual property ramifications of this technology coming within the reach of common citizens must be astounding.<br /><p>This is still out of reach for me, but I would not mind renting one for a few days.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-72395343621415112422007-12-27T20:10:00.000-08:002007-12-27T20:13:56.854-08:00Switching to linuxLots of people are thinking about switching to linux these days. There's a website called Groklaw dedicated to documenting legal events, and they have quite a useful page on the subject. 
Rather than working up my own, I'll <a href="http://www.grokdoc.net/index.php/Switching_to_Linux">point you there.</a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-37150003071419003182007-05-28T09:28:00.000-07:002007-05-28T12:35:26.636-07:00Video on phone update<p>Recently I had to replace my video-capable phone with a Blackberry, and as luck would have it I got mine only days before the Blackberry that does video and audio came out. It was a tradeoff. My thumb hurts because the BB scroll wheel is a poor substitute for a touch screen. On the other hand, mail arrives quickly and I can use it as a wireless modem for my laptop.<br /><p>Since my blackberry doesn't do video any more, I chose the <a href="http://www.newegg.com/product/product.asp?item=N82E16855507004">Centon moVox 1GB</a> for that. It's $50 delivered, plays mp4 videos and mp3 audio, and stores a gigabyte of info. You format it and drop files into it like a pen drive, and they play just fine. It charges from USB too. I've tried the methods previously described here for converting video, and they work just fine. Now I can store four two hour movies on it and a couple albums, and reserve my phone for other stuff.<br /><p>You can't tell from looking at the Newegg description, but the thing is very small. It's about 1cm thick, 3cm wide and 4.5 cm tall. The buttons are not intuitive -- you have to play with them to figure out that ff+play = enter on the menus. The manual is no help -- it's an amusing example of engrish, and lacks even an identification of the buttons.<br /><p>There is no output for external video -- this thing is strictly a microscreen player. If you're interested in a little recreational video in a pinch, though, it will do the trick. If you get one, don't forget to format it FAT. You can't store any media on it until you do. It comes in more expensive versions up to 8GB. I have no idea what battery life is, but after I've used it a while I'll update this post. 
As always, Newegg delivered promptly and as advertised. It's a lot more fun to just fire off an online order in the middle of the night than to burn a precious day and five gallons of gas wandering from (hopefully open!) store to store searching for something that might or might not be good.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-67780063166160795912007-05-28T09:23:00.000-07:002007-05-28T09:27:56.951-07:00Realtime satellite photos of AustraliaYou can see cool weather photos of Australia from <a href="http://realtime2.bsch.au.com/vis_sat.html">BSCH</a> which is the <a href="http://www.bsch.au.com/">Brisbane Storm Chasers</a> homepage.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-64194615602489505532006-12-30T16:37:00.000-08:002006-12-30T19:39:22.187-08:00Play that DVD on your phoneSure, you paid for that movie on DVD. You didn't care what format it was in, you just wanted the movie. Now you're busy like me and you want to watch it on your cell phone or handheld video player. This article tells you how it's possible, without buying any software.<br /><br />Before I get into that, though, I have to warn you. The movie studios don't want you to have that content available in the format you desire. You're going to break the laws of several nations on your way to convenient video. Such is life. When you get busted, I didn't tell you to do it, I only told you how. BTW, this guide is for media shifting and legitimate backups only. Don't go using this information for sharing purposes. If you didn't buy access to the content, you've got no right to it. If you're not sure you have a legal right to make backups in your local area, consult the services of a legal professional before continuing.<br /><br />On with the show. This isn't the only way to do this, but it's a convenient way if you have the necessary equipment. 
You'll need:<br /><br />A windows computer<br />A linux computer<br />A device to play your video on that plays MPEG2.<br /><br />The first thing you need to do is to get it off of the DVD and onto the hard drive of your windows PC in an unencrypted format. For this you can use the program DVD Decrypter. Just install it from the downloads section of <a href="http://www.doom9.org/">http://www.doom9.org/</a> and insert the DVD. The default settings almost always work. You'll want to do one movie first, just to be sure you have it right. After that, if you're converting bunches it might be best to decrypt several at a time.<br /><br />The next thing is to get it out of the DVD file format and into something a little more portable. For this you can use Auto Gordian Knot (AutoGK). It's available from the same source. In this step choose encoding as AVI with the Xvid video codec and MP3 VBR for the audio. For output settings choose 100 percent target quality, rather than size- or CD-based output. That way your output will come all in one file and it will only require one pass (it's faster). Also, multiple sessions of lossy compression introduce unwanted artifacts in the output. Always keep the maximum quality until the final rendering step. Don't worry -- the file will be small enough to fit in your player when you're done. If you're doing a batch, choose Add Job and then select a different disk folder and output file and add it to the batch. Click start.<br /><br />AutoGK uses some external programs to convert your file according to the settings you gave it. They're installed with AutoGK, though, and you don't have to think about them much. The first time you use them, though, at various steps you'll be prompted to accept their license terms. AutoGK is doing the really hard part here. It's converting the original coded and compressed video into a different video codec, basically by decompressing the images and then recompressing them in the new format one at a time. 
A movie has hundreds of thousands of image frames, so this can take some time. In addition, it's doing the same thing with the audio portion of the movie, and keeping the sound in sync. If your computer isn't completely stable and reliable, this is when you find out.<br /><br />Once you have your output .avi file, you need to get it over to the Linux computer so you can process it with ffmpeg. It's possible to get ffmpeg installed on your Windows computer, but there's no way I can provide instructions for that in a blog post. If your computers are networked, you can just save the file over to the Linux box. Otherwise you can burn the file to a DVD, though it can be pretty large; some of these files will be more than 2GB. You can also use an external hard drive, which comes in larger sizes. Most current Linux distributions can read the files off of an NTFS-formatted external hard drive. If the hard drive is formatted with FAT32 instead, you'll have to keep your file sizes under 4GB, the largest file FAT32 allows. However you get it there, you'll want the .avi file on a hard drive that's local to your Linux box.<br /><br />When you have the file on your Linux box, the last step is easy. You'll need ffmpeg. It comes with most Linux distributions, but if you don't have it, get it the same way you get your other software. (Note for Windows-only users: Linux usually comes with software to install thousands of useful programs like this for free.) While you're at it, get vlc (videolan-client) as well for watching movies. Run ffmpeg on the file like this:<br /><br /># ffmpeg -i IN.avi -s 352x288 -ab 32 -ac 1 -b 64 -ss 25:00.00 OUT.mpg<br /><br />Replace IN.avi and OUT.mpg with the files you want, of course. The options are like so:<br />-i IN.avi - the input file<br />-s 352x288 - The output resolution.
Use what's appropriate for your device.<br />-ab 32 - Audio bitrate, 32kbps<br />-ac 1 - Mono output<br />-b 64 - Video bitrate, 64kbps<br />-ss 25:00.00 - Start 25 minutes into the video (the format is mm:ss.xx); leave this option out to convert the whole movie<br /><br />How long it takes to convert varies with your computer power. On a Core2 Duo laptop the decrypting takes about 20 minutes. The conversion to AVI takes about an hour. On an Athlon 2500+ the .mpg conversion takes about 30 minutes. All of that for a typical 90-minute movie, using the settings above. The finished movie might be 96MB. Quality is about what you would expect for watching a DVD on your phone.<br /><br />The last step is to get the .mpg movie onto your device. SD media is great for this, or you can use whatever sync system comes with your device. Since it's unencrypted baseline video, it should play on almost anything that claims to be a video-capable device. A two GB SD card holds about 20 typical movies.<br /><br />If you have better ideas for how to do this more conveniently, I would love to see your comments.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-48719356886618944792006-12-16T20:55:00.000-08:002006-12-16T22:04:45.255-08:00Flash HDDA lot of noise is being made these days about Flash chips and their potential use as system hard drives. I thought I would write about some of my impressions on the subject.<br /><br />The first issue is write cycles. In the past, Flash media was good for less than a million writes or so. This was completely unsatisfactory for most uses as an operating system medium, because systems generally use the hard drive as swap -- a short-term place to store programs and data when they're not busy. In an active system the swap memory can be rewritten thousands of times a day, as programs are swapped into and out of memory very actively.<br /><br />There are operating systems that don't function in this way.
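To get a rough feel for the write-cycle math, here is a back-of-the-envelope sketch in Python. Every figure in it -- capacity, erase-cycle rating, daily write volume -- is an illustrative assumption rather than a spec for any real device, and it assumes ideal wear leveling that spreads writes evenly across the whole medium:

```python
# Rough flash-lifetime estimate. All figures are illustrative
# assumptions, not vendor specifications.

def flash_lifetime_years(capacity_gb, erase_cycles, gb_written_per_day):
    """Years until wear-out, assuming ideal wear leveling, so the
    total writable data is capacity times the cycle rating."""
    total_writable_gb = capacity_gb * erase_cycles
    return total_writable_gb / gb_written_per_day / 365.0

# An 8GB device rated for 100,000 erase cycles, with the OS
# writing a heavy 20GB of swap per day:
print(round(flash_lifetime_years(8, 100_000, 20)))   # prints 110 (years)

# The same device with only 10,000-cycle media and 100GB/day:
print(round(flash_lifetime_years(8, 10_000, 100)))   # prints 2 (years)
```

Under heavy swap traffic on low-endurance media the estimate collapses from a century to a couple of years, which is why avoiding swap matters so much here.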
Certainly swapping was necessary when computers had little memory, but with cheap PCs able to handle eight or sixteen gigabytes of memory, it doesn't seem to be as necessary as it once was. Choosing to avoid the issue in this way does limit your choice of OS, but not horribly so.<br /><br />Recent advances in flash memory have extended its life into hundreds of millions of write cycles, or more. If you're willing to accept a lifespan of five to ten years for your flash memory, you should be fine with what's available now. Certainly we can expect this trend to continue: Flash will keep getting more durable until it remains good long after one would normally consider it obsolete.<br /><br />Currently available flash media (December 2006) comes in sizes up to 32GB for Secure Digital media. That's a lot of memory for a card that small, and it comes at a premium price. When I first bought a flash drive, the largest available size was 64MB, and that was only a few years ago. Sizes have increased five hundred times in just a few years, and manufacturers are even now working on several generations of denser media. Since a large operating system install should be no more than 8GB currently, and that size is available, Flash media has cleared the hurdle of being large enough to handle the job. Although 32GB media is available now in larger form factors, the cost of the flash chips is enough to prevent a large market for the devices, so they're not yet common except among those who have no budget constraints.<br /><br />Speed is another issue when considering Flash media for your system drive. Although Flash currently can be much slower than HDD media, that is changing as ever more chips are added and accessed in parallel. Already you can get media that reads and writes faster than an ATA HDD. Soon it will be much faster.
More importantly, since Flash has no moving parts, there is no latency to speak of, and every file is as close as any other. This simplifies much of the disk access process and makes file access much faster. Speed is about to cease to be an issue for Flash media; for most uses it's already faster than a hard drive. With Flash a cache might be necessary for some applications, but unlike HDD media, the amount of time it takes to flush the write cache is predictable and controllable from the system rather than from the controller or on-drive electronics. This makes shutdown issues go away almost entirely.<br /><br />Power issues are important for storage media -- not only for battery life but also for heat. Here Flash has long been a big winner for cameras, smart phones and PDAs. Because Flash is a static medium, no energy is required to maintain the data stored within or to keep it accessible. There is no spin-up time, no idle power at all. If you're not writing to or reading from it, it uses no power. Naturally, devices that use no power generate no heat. Even in its most power-hungry use, writing, Flash media doesn't take as much energy as HDD media at idle. The heat issue is an important one because the more thermal energy a device dissipates, the larger it must be to cool passively, and active cooling adds energy costs, size and noise as well.<br /><br />Flash media has no moving parts. It is utterly silent. This by itself makes it a preferred medium for applications like fanless computers in audio recording environments and low-energy entertainment center platforms where even small amounts of fan noise are unacceptable.<br /><br />There are already many distributions of Linux that can be installed to Flash media. Soon this will be a standard install option across nearly all distributions. Installing to Flash media can be very handy for workstations.
Rather than evolve a fancy network system for maintaining each user's settings in a portable way (one that fails if the network is down or the user is offsite), the user can just extract his boot chip and take it with him. Then he can arrive with his full toolset and get to work without worrying about which applications don't install correctly at the new station, aren't integrated properly, or don't have his preferred settings.<br /><br />Price is often an issue. Flash media costs many times what hard drives cost for the same amount of storage. Today I can buy a 2GB SD card for about $30.00. It would take 250 of those to match the volume of a 500GB HDD that costs $150.00. For this article, however, I'm not talking about a volume for storing your media or your database. 8GB should be plenty right now for an operating system and a suite of applications for normal use, and that can be had for a reasonable price, so I think Flash has cleared the affordability constraint as an operating system install medium.<br /><br />For these reasons I think we're approaching the day when a Flash drive is part of the motherboard on desktop and server systems for OS and application installs, at least as an option. It seems likely to become the standard for media center PCs as well. I also expect to see more support for installing to this sort of media. It seems reasonable that advances in Flash media will continue to outpace progress in other areas of information technology. In addition, for highly portable devices like smartphones, richer suites of applications should soon be available to exploit the advantages of having larger static storage on hand.<br /><br />Some consolidation in the Flash memory industry seems likely as well. Highly competitive markets like this one erode profit margins and depreciate inventory disruptively.
Obviously in this environment purchasing a competitor can be cheaper and more effective in the long term than inventing a new process that increases the storage density or speed of your product line.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-78352101932226550462006-11-19T16:27:00.000-08:002006-11-19T16:36:51.005-08:00HDTV 720p, 1080iIt's 2006. There's a lot of product shipping right now in the HDTV market, and a bunch of people who are buying in early are going to be unhappy.<br />1080p is the full HDTV resolution that is supported by Blu-ray and HD-DVD. It looks totally awesome. It is supported on the PS3, since it's Blu-ray compatible.<br />One of the amazing things about the pace of tech progress is that oftentimes the next generation of product will arrive at the same price point as the previous generation. Right now you can get a generous 42" 1080p monitor with HDMI and/or DVI input for under $1500. That's actually less than many of the 720p or 1080i monitors I've seen at the same size.<br />Early adopters usually expect to bear the brunt of the costs for new tech, but now that 1080p monitors are available, many of them will choose to upgrade from their previous 720p LCD or plasma selections. Someone who's shopping now shouldn't settle for the lower resolution, because they'll only have to upgrade later and pay the same price twice.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-78491696680712113322006-11-12T23:13:00.000-08:002006-11-12T23:22:39.742-08:00FiremumbleThe browser Firefox is getting a lot of attention lately because its makers have asked the Ubuntu people to stop shipping their modified version with the "Firefox" symbols.
Ubuntu will be naming it "IceWeasel".<br /><br />Not everybody remembers that there used to be a <a href="http://addons.mozilla.org/firefox/31/">Firesomething</a> extension for Firefox that allowed you to change the program name, or to have it change randomly. It can be a lot of fun. The developer has stopped supporting it, but apparently the extension can be modified to work with version 2.0, so we can have the fun all over again.<br /><br />The joke here is that the Firefox team had to change the name a couple of times early on, so as not to conflict with extant products.<br /><br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-47913457156260716002006-11-11T23:18:00.000-08:002006-11-11T23:38:26.732-08:00Toynbee ideaAs long as I'm blogging about enigmas, I may as well add another. Have you heard about <a href="http://en.wikipedia.org/wiki/Toynbee_tiles">Toynbee tiles?</a> They're laid in streets all over the world, and one of them once nearly killed me. I was crossing a busy street and there in the asphalt was the sign:<br /><blockquote>TOyNBEE IDEA<br />IN KUbricK's 2001<br />RESURRECT DEAD<br />ON PLANET JUPiTER.</blockquote>Struck dumb for a moment as I unravelled the meme, I was nearly smashed by a bus.<br /><br />Anyway, if the story intrigues you, you can check out the article on Toynbee at <a href="http://en.wikipedia.org/wiki/Arnold_J._Toynbee">Wikipedia</a>, read some of his works at <a href="http://www.gutenberg.org/browse/authors/t#a3329">Project Gutenberg</a>, or (extra credit) find page 22 of the Feb 4, 1958 issue of The Atlanta Constitution.
For sharing the enigma, you can as usual get themed merch from <a href="http://www.cafepress.com/buy/Toynbee/-/cfpt2_/copt_/cfpt_/source_searchBox/x_0/y_0">CafePress</a>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-15856366378828928522006-11-11T23:04:00.000-08:002006-11-11T23:39:52.681-08:00Where in the multiverse is John Titor?John Titor was a poster on Usenet many years ago, who claimed to be from the future. His story remains an enigma. You can read more about him on <a href="http://en.wikipedia.org/wiki/John_Titor">Wikipedia</a>.<br /><br />If you're so inclined, you can buy John Titor themed stuff from <a href="http://www.cafepress.com/johntitor">CafePress.</a> It's the kind of inside joke that takes a special person to appreciate. What intrigues me about the story is that I learned programming on an <a href="http://www.postbulletin.com/magazine/2004/08/index.shtml">IBM 5100</a> and was familiar with some of the material discussed. I think I read some of his articles when they first appeared.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-57245882982448173592006-11-11T22:52:00.000-08:002006-11-11T22:58:57.960-08:00Discover the GRAIL<a href="http://grail.cs.washington.edu/">GRAIL</a> is the Graphics and Imaging Laboratory of the University of Washington's Department of Computer Science and Engineering.<br /><br />What's interesting about it is that they have a lot of <a href="http://grail.cs.washington.edu/pub/">papers</a> available on computer science as applied to graphics.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-33506688.post-35289567699908735522006-11-11T22:31:00.000-08:002006-11-11T23:40:47.343-08:00Music from mathThe application of math in music is an interesting concept with a long history. Rather than bore you with it, let me suggest <a href="http://tones.wolfram.com/">Wolfram Tones</a> as a primer.<br /><br />This website has a number of generators that let you create music. 
It's done by the people behind Mathematica. One cool feature is that once you've found or made a composition you like, you can send it to yourself as a ringtone.Unknownnoreply@blogger.com0