The price and size of computing have been falling steadily year by year, but suddenly there has been a major change. Among the reasons for this are the scale of the market, the reduction in component costs, the use of free software such as Linux, the effects of the OLPC project on manufacturers and the developed world, and the developing web 2 culture. All these factors have come together to create the conditions for a new type of computer to be successful – the low cost Ultra Mobile Personal Computer (UMPC).
There have been UMPCs before but they have often used miniaturisation to justify high prices or have not been particularly practical. However, Asus have created a huge impact with their Eee PC – a significant departure from standard laptop offerings: a £200 price point, all solid state (no spinning hard disk to slow things down and drain batteries), just enough local memory to get by on and the use of Open Source Software (although Asus now sell a version of the Eee PC running Microsoft software).
I find the Asus Eee PC to be very impressive:
– A £200 price point means we can purchase in large numbers and achieve new effects through scale
– A £200 price point means that more users can purchase their own – helping with social inclusion and achieving new effects through personal computing
– Small but usable – this size of computer is easy to carry around and is genuinely mobile
– Quick to start up – information and communication are far more pleasant without having to wait 2 or 3 minutes for the computer to let you get started
– Very easy to use – everything you need for most tasks is already installed and is easy to use
– Fits well with the developing model of cloud computing where we use the net for applications and storage. The Eee PC is quick and easy to get online and comes with icons to connect you to Google Docs, for example.
– Can accommodate standard local computing – it comes pre-installed with Open Office for standard “Office” applications which are easy to use and compatible with Microsoft Office too.
Like all successful products it is in the right place at the right time – it is a perfect consumer computer for the masses. It feels less like a computer and more like a “gadget” – something usable by a wider range of people than most computers.
I will be exploring possible applications for the Eee PC in education over the year. Some of the applications I will be looking at are:
– Use on external projects like work experience, trips and community use
– Use in non IT suites as an information appliance
– Use in new models such as allocating to individual students on various courses or projects.
In the video Richard dons a white lab coat to investigate the technical aspects of the Asus Eee and Penny talks briefly about the effect such technology can have in teaching.
Richard carries out a boot race between the Asus Eee running Xandros Linux and a standard tablet computer running Microsoft Vista. Before Richard has a chance to log in to Microsoft Vista he has used the Asus Eee to get on the Internet and Google Docs and to launch Open Office for word processing. The Eee PC is about 30% cheaper than a standard laptop, 30% smaller and lighter and 30% faster to start and stop – 35 seconds after pressing the on button you can be surfing the net.
Penny from the design team talks about how the use of personal IT can change the nature of teaching and learning as information is readily accessible to students – opportunities for more research based learning are possible. Penny also talks about how more and more students have smartphones with which they can take pictures, send emails and browse the Internet.
When is a computer not a computer? It looks like we are entering a new phase of computer diversity – 2008 may see the beginnings of some exciting new developments in education and IT.
The current period is a very exciting and busy time in IT (the euphemism is challenging). For IT managers it can feel like being stretched on a rack in many directions at the same time. Here are some of the dimensions on which IT managers are stretched and squashed. Welcome to the IT “Torture” chamber – pain or pleasure?
Stretched on the x axis (width/ Scale)
IT managers are used to increases in scale – year on year increase in the quantity of devices to accommodate. In education almost all staff have their own computer and many have more than one e.g. desktop, laptop and smartphone. We add extra student computers and IT suites each year and now consider extending the personal computing idea to students – providing students with laptops under various schemes and starting to accommodate their own equipment.
Stretched on the z axis (depth/scope)
We no longer talk about digital convergence – these days almost everything is digital – almost everything is mediated by IT. IT managers have to accommodate an increasing range of hardware and software for all manner of applications – from standard computing to multimedia, video conferencing, web 2 and users’ own equipment. We also need to accommodate remote access for increasing amounts of distance learning, home working and collaboration – IT access anyplace, anytime, 24/7 – more, more, more.
Stretched on the y axis (height – past and present)
The dimensional analogy is breaking down here but think of the poor IT manager’s feet anchored to “legacy” systems and arms pulled in the opposite direction by new and developing systems. As the years pass we have to add more and more systems to our support and operational skillset. We have to maintain these “legacy” systems while changing the infrastructure that supports them (hardware, software and compatibility issues) while remembering how to sort out their problems. For anything that is installed we have to deal with constant updates (security and reliability patches/service packs, extensions, additions etc.). We also have to deal with the demands for the latest hardware and software – how to install, operate, secure, maintain and integrate them with existing systems.
Compressed on the t axis (Time) – running faster on the wheel
We need to deliver more and deliver it quicker – turnaround times everywhere are getting shorter. There is a kind of positive feedback loop – IT development is getting faster and as society is mediated by IT then timeframes in society get shorter – “faster is the new fast”
While having to do more and do it quicker we have at the same time to ensure the reliability and security of IT systems. Society depends more and more on IT – downtime is less tolerated than it used to be and the consequences of downtime more significant. While society is increasingly mediated by IT so all the issues in society are also mediated by IT – crime, vandalism etc – while we are ever more dependent on IT the security of IT is harder to maintain.
So when accessing data systems while sipping coffee in an Internet cafe please spare a thought for your poor IT manager down in the IT torture chamber – coping with extending access while at the same time securing data and maintaining reliability.
SOS – Save Our Systems
Email overload is a problem for our users and our systems.
Save yourselves and your systems – kick the habit – reduce your “email dependency” – there are other ways
– Don’t use college email for personal use
– Don’t use email for files
– Don’t use email for non-urgent matters
– Don’t use email for discussions
– Use email for private and “confidential” communications
– Use email where urgency is needed
– Use email only to grab attention (one-to-one or one-to-many)
– Use blogs, discussion groups or social networks where many-to-many interaction is required
– Use file sharing sites like Flickr, Youtube where files can be shared in public
– Use social networking sites to share information with a community of users
I’ve been thinking about what could be described as defensive organisations – about how the risk of failure becomes paramount and how management systems develop to focus on the past to justify the present – we risk tripping over the obstacle ahead while we are looking behind.
I’ve also been thinking about the importance of living outside your comfort zone in order to develop and how so many people and organisations spend effort to avoid moving outside their comfort zone.
So to “kill two birds with one stone” I’ve decided to risk failure and move out of my comfort zone to see what happens with a video blog – video/audio work is something I’m not comfortable with at all but the only way to improve is to have a go – so here goes – it could be fun.
This is the first video to outline what I’m planning to work on in the week ahead – please feel free to leave your comments
Consumerisation and personalisation are the underlying trends in recent education thinking, technology developments and our culture generally.
This blog attempts to combine recent educational proposals with recent IT developments, describe some of the challenges and make some suggestions for meeting these challenges.
Recent educational papers promote ACTIVE LEARNING through consumerisation and personalisation. They promote demand led learning, competitive learning markets, learner accounts, greater learner choice and soft skills such as research, problem solving, collaboration, communication and information management. We will also be expected to deliver learning across boundaries – in the workplace, in other institutions and at home. The educational papers suggest the mindset required.
Recent IT developments promote ACTIVE IT through consumerisation and personalisation developments in mobile technology and use (smartphones, UMPCs, WADs, mobile broadband) and through continued development in the capabilities of on-line social networking and Web 2 applications and spaces. The IT developments suggest the tools required.
My opinion is that developing NET technologies such as PIE (Personal Information Environments) and MASH (the ability to combine different information sources) will provide some of the tools to operate and learn beyond the traditional boundaries of Space (locations) and Time and allow us to deliver the Active and Personalised learning the educational papers promote. The crucial thing is that these tools are useless without the mindset to operate them and that the tools and mindset have to apply to institutions and not just to learners.
Recent educational thinking promotes active learning and soft skills such as research, problem solving, collaboration, communication and information management yet our systems (exams, quality, IT and buildings) offer an environment developed from 20th century learning approaches and don’t offer a natural environment to develop active learning.
Everyday life will be increasingly mediated by the net (information, leisure, work, learning, shopping, socialising, etc) and being “on net” is increasingly vital. This will be the context in which we will be expected to operate in the near future. We should expect to see more people seeking web access and carrying around Web Access Devices (smartphones, UMPC, laptops etc). We should expect to see more people using personal web spaces – interacting through social networks, using web 2 applications or using customised Personal Information Environments (PIE) created by MASHING applications, feeds and links.
In education there is a tension and a challenge. Young people, teachers and institutions are operating at different speeds within different contexts – young people (major educational consumers) are relatively comfortable with “NET LIFE”. Teachers have some experience of “NET LIFE” but generally don’t have the time or support to explore and develop it and its use in education. Institutions change even more slowly – they have few incentives to engage in the risk that change brings – to do so risks upsetting hard-won quality development. Thus we have a problem – educational thinking, technology, culture and our students are all moving on at a faster pace, yet how are institutions expected to deliver in the real world the reforms described in educational papers?
The challenge is to be able to provide active learning opportunities in increasingly flexible ways – for learner “consumers” to access learning where, when and how they wish. Rammel (2006) for example illustrates one aspect of our challenge – “The development of Specialised Diplomas as a modular qualification with young people taking different modules or qualifications in different institutions will present challenges.”
Suggestions for meeting the challenge
1. Re-engineer our networks
With more people (staff and students) using personal web access devices and personal information spaces we need to build our networks to allow these to operate in our institutions.
1a Re-engineer our networks to provide bandwidth for guest devices to access the net.
I’m already seeing many iPhones, for example, on our system. The objective is to provide a secure internal/institutional network but with some form of guest access to the Internet. One solution on wireless is to use guest SSIDs to which guest devices associate and then tunnel them out onto the Internet through web access filters without “touching” our secure networks. Ultimately, however, the spread of wireless WAN (3G, WiMAX) will reduce the need to accommodate guest devices on our own networks.
1b Re-engineer to increase bandwidth and reliability especially for our Internet connections.
Network and Internet access is everything and Internet connectivity will increasingly be seen as the priority. This means that our networks will be busier and that we should be increasing bandwidth and reliability to accommodate this. Increasingly “the network is the computer” and investments in our networks should be prioritised.
2 Re-engineer our Systems
With more people (staff and students) using personal web access devices and personal information spaces and access needed from staff and students in workplaces and other institutions we need to build our systems to allow Internet access and to provide data interfaces for users. We should plan to make it possible for a learner to use their own personal information environment (PIE) to access our systems.
2a Re-engineer systems for Internet access as a priority.
All relevant learning systems should be designed and built Internet first.
2b Re-engineer systems to provide Internet data feeds and interfaces.
All data should have web access and we need to think about interfaces for users and enabling RSS.
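As a sketch of what enabling RSS on a system might involve (the feed title, links and items here are entirely invented), Python’s standard library is enough to generate a minimal RSS 2.0 feed:

```python
import xml.etree.ElementTree as ET

def build_rss(title, link, items):
    """Build a minimal RSS 2.0 feed string from (title, link) item tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for item_title, item_link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_link
    return ET.tostring(rss, encoding="unicode")

# Illustrative feed for a hypothetical college announcements page
feed = build_rss("College announcements", "https://example.edu/news",
                 [("Half-term dates", "https://example.edu/news/1")])
```

A real deployment would of course serve this over HTTP and add dates and descriptions, but the point is how little is needed to give any data source a feed.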
3 Re-engineer our physical environment
Recent educational thinking promotes the development of active learning and soft skills such as research, problem solving, collaboration, communication and information management. The traditional classroom is not a natural environment for this type of activity – we need to develop new learning spaces that are more natural to active styles of learning.
We need to create and support experimental learning spaces in which to develop new teaching ideas. One key ingredient is that learners can change the environment to accommodate new learning – group work / project work / net access. The other key ingredient is that there is adequate support on hand – technical and educational.
4 Re-engineer our curriculum
Current curricula remind me of trying to fit round pegs (learners) into square holes (colleges) and the problems this causes when increasingly we want round pegs. Current curricula and operation are derived from the 20th century industrial age – they have fixed time and space slots (lessons) – they have industrial style advantages in terms of quality control and management but present real problems for the active, flexible learning educational reforms being promoted. Curriculum change is probably going to be the most difficult problem. How can we manage and deliver a curriculum where the resources, time and space for various activities change from week to week and where students might pick and choose what to learn?
Re-engineer curricula to be modular
This seems to be necessary for flexibility. It would be advantageous, for example, to be able to study business modules alongside science and arts modules.
Re-engineer more of the curricula to be on-line
This seems to be necessary for flexibility. Timetable clashes might prevent certain combinations, but if a module is studied on-line then the restrictions of space and time disappear – we can study wherever and whenever we want, and as much or as little as we want.
Leitch (2006) views the natural resource of the 21st century to be our people and that education is key to developing this resource. Institutions are made of people and ultimately none of the education reforms can happen unless our people (teachers, support, admin, managers and executives) engage with the new paradigm – we need to develop both the mindset and the tools. The mindset is Active, Flexible, Collaborative and Experimental. The tools are those of the NET, including web 2 and social networking. We need to begin using and experimenting with RSS, tags, blogs and groups, for example, to enable us to work across traditional boundaries and the boundaries of space and time.
My IT Crystal Ball is showing me all things Mobile and all things Web – has it got tunnel vision or is this really the future it is showing me?
I’m expecting to see a swing during the year towards user oriented personal computing.
Expect to see lots of very small and friendly devices with on-line web capability and touch screens. In particular I am expecting the rise of access away from traditional computers (desktops and laptops) to WADs and smartphones.
Expect to see a rise in the development of the personal web through social networks, web 2 and personal information environments such as iGoogle.
WADs (Web Access Devices)
The thinner your WAD the better.
Easter looks like being a key period in 2008 with Menlow and large Solid State Discs becoming available (see hardware section below).
Expect to see increasingly slim, light units with good battery life, net access and touch screens. These devices may have local software but access to the web is where they will excel and cutting down the costs of local software will keep the WAD cost down. The browser will be the main feature – the operating system while important will become less significant. In this respect the Asus Eee from 2007 set the scene for WADs to come in 2008.
Apple are expected to announce a wafer-thin MacBook at the January 15 Macworld, for example – like the iPhone and many other Apple products this is likely to generate a lot of interest and could set the scene for the year.
We may see more of the
iPhone users comment that they prefer to access the web from their iPhones rather than their computers – it’s more convenient and faster (certainly by the time you start up a computer and get online) and will be even better with faster mobile internet speeds.
Expect to see a lot more people using their phones to access information – this is personal computing the way it should be.
Below are some anticipated hardware and software developments that have caught my attention.
45nm chip manufacturing and competition
Large Flash memories and Solid State Drives
I remember paying more than £1 per Mb a few years back – will 2008 be the year when we get to £1 per Gb?
Fuelled by consumer products (cameras, audio, phones etc.) the production of memory and solid state drives is set to increase in 2008, with Toshiba among others due to release 128Gb SSDs around March. Solid state drives are smaller, faster and consume less power than traditional electro-mechanical drives – meaning smaller, faster and longer lasting digital devices.
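A quick back-of-envelope calculation (in Python, purely illustrative) shows what the fall from £1 per Mb to £1 per Gb would mean for one of those 128Gb SSDs:

```python
# Rough illustration: cost of a 128Gb SSD at two price points.
price_per_mb_old = 1.0   # £1 per Mb, as remembered from a few years back
price_per_gb_new = 1.0   # £1 per Gb, the hoped-for 2008 price
capacity_gb = 128

cost_old = capacity_gb * 1024 * price_per_mb_old  # at the old per-Mb rate
cost_new = capacity_gb * price_per_gb_new         # at the new per-Gb rate
ratio = cost_old / cost_new                       # a 1024-fold fall in price
```

At the old rate a 128Gb drive would have cost over £130,000 – which is why solid state at consumer prices changes what devices can be built.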
Around April 2008 Intel is due to release the 45nm Silverthorne CPU and Menlow UMPC architecture – Menlow is planned to consume half the power of current UMPC designs and just 25% of the older Celeron M architecture.
Touch and gesture
2007 was the year when interface innovation outshone basic technology power – the Apple iPhone and Nintendo Wii are both less powerful than their competitors but stole the show. There is now talk about how the success of the mouse has hindered interface development – it seems just possible that the Wii and iPhone could herald a revolution in interface design. Touch interfaces seem likely to take off soon but gesture (like voice control) is likely to take longer – I’m looking forward to the day when I can show my computer (running Vista) the finger and have it understand what I mean.
Microsoft have been demonstrating their Surface for most of 2007 and are due to actually release something in 2008 – most likely in the touch sensitive table format. However, Apple have worked at the personal end – the touch technology in the iPhone is probably its best loved feature and is due to revolutionise interfaces. As with much of what Apple does, the rest of the industry will follow – Toshiba and Dell have both announced touch sensitive tablets, for example.
I can’t really think of anything interesting to say about traditional software – we are expecting Windows Server 2008 and SQL Server 2008 this year – I’m just hoping that Microsoft can get Vista fixed. The importance of the operating system seems to be fading – this is perhaps why Vista is not being adopted as fast as previous new operating systems (remember the fanfare for Windows 95). Expect to see more non-Microsoft operating systems being used. Most virtualisation suppliers have new products due for 2008 and the competition is hot – virtualisation looks set to be huge for both servers and personal devices in 2008.
All the action seems to be on-line and on the Web – it’s here where development and “distribution” are faster. “The network is the Computer”. There will be battles of the giants, frantic acquisition activity and exciting unexpected development away from mainstream attention. This could be the year when Google’s infinite expansion succumbs to the real world and the rot sets in – it happens to them all.
The action on the web means that the software on your device will become less important – as long as it has a good browser. However, pervasive net access is not yet a reality, so to address this issue Google (Gears), Adobe (Air) and Microsoft (Silverlight) have developed systems for on-line/off-line applications. Expect to see a lot more WADs (Web Access Devices) – small cheap units like the Asus Eee.
The iPhone Safari browser now sets the standard for mobile web access – a full and proper AJAX browser that the user can zoom in and out of, instead of a special mobile version needing server-side work to detect and deliver. Like WAP, special mobile browsers just won’t get developer attention.
Social networking reached something of a critical mass in 2007 – suddenly we found figures like 80% of teenagers and 40% of adults using it. With such momentum behind it, social networking development will be big news in 2008, but with nearly everyone already using social networks I think the big stories will be about application development and interfaces – in a saturated market this will determine who steals whose users and where the newbies go.
Like social networking Web 2 applications reached critical mass in 2007 but the application potential of Web 2 has only just begun. The news value of web 2 developments and how we use them won’t get the attention of social networking but the impact will be more fundamental. Expect a lot more use of collaborative-shared-web applications and development of web applications for just about anything you can think of and haven’t thought of yet.
Location based services
Location and geographic applications really took off in 2007. In the same way that few people seem to go to a printed encyclopaedia (I can’t spell it any more), most people look up location data on Google Maps or Microsoft Live Search. On-line location demonstrates MASH extremely well and shows how much more you can do on-line – for example you could bring up the map of your area, search for restaurants, compare menus and prices, get contact information, see if they are free, book a table and get directions from where you are. With London hosting the next Olympics the market for location based services is set to go critical in London over the next few years, again fuelling the development of mobile internet access.
Web 2 x Social networking and the PIE (Personal Information Environment)
More wish fulfilment than crystal ball gazing is that there will at some point be a good combination of social networking and Web 2. Currently web 2 sites like Zoho or Google don’t have the well developed communities of MySpace or Facebook. Currently social network sites don’t have the well developed applications of web 2 sites. Whoever can solve the puzzle and put the two parts together will have the whole market to expand into. It’s impossible to predict who will achieve this – for the big established players I’m thinking that there will have to be a merger or acquisition, but Google might have a solution with OpenSocial. We might, however, see a solution from anyone.
Developments like OpenSocial point to another solution – the Personal Information Environment (PIE) where the user can MASH together their own “portal” – a site the user builds to interact with the Internet – a place where the user pulls in the various social networks, applications and data feeds. We saw signs of this in 2007 with the applications that allow Facebook users to interact with various systems and with sites like Pageflakes and iGoogle that allow users to create and customise a “home” page with RSS feeds and links.
My opinion is that the PIE offers the most interesting possibilities. By providing a superset of Web 1, web 2 and social networking it could change the nature of the web as we know it – everyone will have their own interface and suppliers will focus on content and interfaces to it rather than presentation.
Web 3 (Semantics and AI)
AI and expert systems used to be a discrete subject for development but it has been “folded” into the mainstream and for over 10 years we have seen the piecemeal introduction of AI into software products (features that automatically tune operating systems and “help” the users of applications). As the amount of information available grows, tools which get better content results and make better data connections will become more popular – tools like Hakia and Powerset, and wikipedia-like efforts such as Twine and Freebase, may rise to challenge Google in the same way Google rose to challenge and overtake established players like Yahoo some years ago.
How far back shall we go? I’ve been around long enough to have experienced what could be described as “physical interfaces” like punched cards and paper tape, and to have waited 24 hours to receive a fan fold print out of the results. In those days you had to try and get the program right first time as the time penalties for mistakes were severe. I remember the command driven interfaces of mainframe terminals, personal computers and applications like Wordstar – if you wanted to control the machine back then you had to do it on the machine’s terms and learn its language. I remember the menu driven screens of applications like Multiplan and Word and the relief of having the computer start to interact with me by offering some appropriate choices instead of me having to enter a command, worry about syntax and spelling and get some strange numbered error code to look up.
I remember the Apple Lisa, the mouse and using Hypercard (unfortunately I didn’t get to use the Xerox Star). What a “tipping” point this was – a completely intuitive package where the computer interface was more on our terms. I remember the various Macs and versions of Windows through the years and have tried speech input and tablets.
However, the user interface is going to get shaken up again – in the way that the earliest interfaces were rooted deeply on the computer side, the next interfaces will be rooted deeply on the user side.
Personalisation is one of the hot topics on the web and is the key to the coming shake up in the user interface. Once, a major element of a system was its user interface design; in the next generation this could be much less significant or nonexistent. Instead the focus will be on the information interfaces (APIs) that the user, their personal environment or their tools can connect to.
This is part of Web 2. If your system provides RSS feeds and APIs then the user can get information from your site without looking at your site – the user can take an RSS feed for a summary of updates, or mash up your content with other information and create something new. It is now increasingly popular to provide system interfaces for social networking environments, so a user could access your site via an application written by you or a third party – via Facebook, for example.
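A minimal sketch of that mash-up idea – pulling items from two feeds into one view. The feed contents here are invented, and real code would fetch the XML over HTTP and handle namespaces and error cases:

```python
import xml.etree.ElementTree as ET

def item_titles(rss_xml):
    """Extract item titles from an RSS 2.0 document string."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")]

def mash(*feeds):
    """Combine items from several feeds into one list – a minimal MASH."""
    titles = []
    for feed in feeds:
        titles.extend(item_titles(feed))
    return titles

# Two invented feeds standing in for different sites' RSS output
news = "<rss><channel><item><title>Term dates</title></item></channel></rss>"
sport = "<rss><channel><item><title>Fixtures</title></item></channel></rss>"
combined = mash(news, sport)
```

The user never visits either site – their own page (or PIE) just consumes the interfaces, which is exactly the shift from presentation to content described above.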
The Facebook programming interfaces have created quite a revolution on the web. This is one area where Google are behind, and their response and developments over the next 6 months will be interesting. Currently Google are working on a common social networking programming interface called OpenSocial that will work with most of the social networking sites – apart from the big names of Facebook and MySpace, it seems.
The development and spread of public APIs leads to the “programmable web” and the start of the evolution to Web 3 and what I call paradigm 3 where software agents interact automatically and exchange information. More on this later.
The traditional system user interface is disappearing – users will choose how to view your system – long live the user interface.
Monday 15th October
Emails were surprisingly light this morning and at 10.30am I managed to get on with some much needed hands-on work with systems. First up was trying to get ISA Server 2006 operating as a simple proxy to authenticate web access against a domain using LDAP – initially I wanted to keep the proxy outside of domain membership but there was no way I could get this working. I decided to try to simplify things and make it part of the domain, which allowed me to browse and select the domain, but actual authentication is not happening – the ISA firewall seems to be interfering with RPC communications. Web help pages on ISA topics seem to revel in complexity when what I want is simplicity – I finally decided to call an end to this, log a call with our support company, have my sandwiches, and get on with something else later.
Answered emails until 3pm then checked and changed the backup tapes and then in a desperate need to achieve some progress for the day I set up and configured 4 virtual servers in just over an hour – we now have 6 virtual servers on this system (2 quad core 2.66Ghz 16Gb RAM RAID 1 300Gb disk).
Tuesday 16th October (today)
Arrived at work at 8am and got straight into knocking off as many emails as possible so as to get some more much needed hands-on time with systems. The engineer for the Goldmine installation arrived at 9.30am and we proceeded to install SQL Server and the product onto one of our virtual servers. I left the engineer with a colleague while I checked on things in the department. Discovered that we only had 2Gb left on a shared “pool” server – 300Gb once seemed quite a bit but no longer – especially when a leaving presentation recently soaked up 12Gb. This pool server and another one had been hovering around the 50Gb free mark for a few months and I had a new server ready. I configured the new server (now with 1.4Tb of space – seems a lot now but in two years’ time this will probably not be enough), made some test restores to it and moved some large files across to free space on the current server. Next week is half-term – on Monday I will make the current server read only so that it can be used for reference whilst restoring 300Gb from tape to the new server (this will probably take a good 6 hours) and when finished the new server will take over.
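The sort of free-space check I wish I had automated might look like this – a sketch only, with an arbitrary 50Gb threshold and the root volume standing in for the pool server’s disk:

```python
import shutil

def check_free_space(path, min_free_gb=50):
    """Return (free_gb, ok) for a volume, flagging it when free space
    drops below the threshold (50Gb here, an arbitrary choice)."""
    usage = shutil.disk_usage(path)
    free_gb = usage.free / (1024 ** 3)
    return free_gb, free_gb >= min_free_gb

free_gb, ok = check_free_space("/")
```

Run nightly and wired to an email (or better, an RSS feed), this would have flagged the 2Gb situation long before a leaving presentation could soak up the last of the space.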
My colleague reported that Goldmine installed and works as planned – another triumph for virtual servers.
I removed two old servers – both of which are now virtual – at last I can actually reduce the number of servers in the server room and still expand.
I decided to try and achieve some progress for the day, so I thought I would install a newly delivered server to host more virtual servers. I installed this into one of the new spaces – this involves enduring the cold and noise in the server room, crawling in the confined spaces between the server racks to install the rails, heaving the server onto the rails and then fiddling around amongst the mass of data and power cabling at the back of the racks. I was looking forward to seeing this very respectable “super server” as a big virtual host – I installed Enterprise edition (for systems with over 4GB of memory) and expected to see its 32GB of memory, but it would have none of that – it showed me just 3GB. After a few reboots and some swearing I decided I had reached a dead end and support from Dell would be needed. They would have none of that either – the server’s tag number is not yet on the support site so that is a dead end as well. This will have to wait then.
In order to get some progress for the day I decided to continue a recent background maintenance task on our Exchange email system. A major issue is dealing with the accounts and mailboxes of people who have left – especially as they often leave and then reappear under different contracts shortly after, so deleting them is a problem. I have been working on an answer – using a dedicated mailbox server to host the mailboxes of disabled accounts. I selected the last batch of disabled accounts and moved their mailboxes to mail stores on the disabled-mailbox server – this is good, as the mailstores on the “live” servers will reuse the freed space and this will stop them growing (shrinking mailstores is a nightmare). On the disabled-mailbox server I have organised 4 mailstores – I will dismount one, wait 4 weeks and, if no one complains, then remove the mailstore – a safer way than just deleting the mailboxes and then attempting recovery.
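The dismount-and-wait policy is really just date arithmetic, and writing it down makes the rule easy to hand to a colleague. A minimal sketch of the logic (the dates and the four-week quarantine period are from the plan above; the function name is my own invention, not an Exchange command):

```python
from datetime import date, timedelta

# Sketch of the "dismount, wait 4 weeks, then remove" rule for
# mailstores holding disabled-account mailboxes.
QUARANTINE = timedelta(weeks=4)

def safe_to_remove(dismounted_on: date, today: date,
                   complaints: int) -> bool:
    """A dismounted mailstore may be removed only after four full
    weeks with no complaints from users."""
    return complaints == 0 and today - dismounted_on >= QUARANTINE

# Dismounted 1 October; by 1 November the four weeks are up:
print(safe_to_remove(date(2007, 10, 1), date(2007, 11, 1), 0))   # True
print(safe_to_remove(date(2007, 10, 1), date(2007, 10, 15), 0))  # False
```

The point of the dismount step is that it is reversible – remounting a store is trivial, restoring deleted mailboxes from tape is not.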
This evening I have been test driving a UMPC (a Vye) at last – the small size gives it a convenience factor that has to be used to be appreciated, and it boots up and is ready to use in just 67 seconds (the convenience of these units reminds me of the convenience of wireless networks – difficult to appreciate over wired until you try it). The problem is that I have just spent about 2 hours installing updates and security patches – this is Microsoft’s problem though – we really need XP SP3. I have had to set the screen on this tiny device (7.1in) to 800 x 600 just to read it – however for most pages this isn’t a problem at all. The device is good for wifi or wired access, but 3G will have to happen via one of the new 3G USB modems rather than a built-in circuit or PC card. I’m really looking forward to the HTC Shift as this is built for wireless WAN (built-in HSDPA 3G and SIM slot).
Given such a small device I also finally had a look at twitter – you can find me at http://twitter.com/timekord – I spelt timelord wrong but never mind I’ll pretend timekord means something.
Personal networks, power sources and computing
There is a tendency for equipment to become smaller and more mobile – to such an extent that in the near future the phrases “Personal Computing” and “embedded IT” could have entirely different meanings.
Research from different areas is coming together to suggest what personal computing might be like in a few decades, or possibly sooner, with personal power sources, networks and computing.
One of the problems with mobile computing is power, but research in bio-engineering suggests many intriguing ways to use the body as a power source, and recent advances in reducing the power requirements of electronic devices could make it possible to run or charge the batteries of some of your gadgets from your own body. There are many ways to do this – using body heat, kinetic energy and blood glucose, among other methods.
Another problem is with communications and wires – especially as devices get smaller – so why not use the electrical properties of your body to transmit information? IBM have been researching this since 1996, but Microsoft hold a patent. Japan’s NTT DoCoMo is working on a prototype mobile phone that can transmit information through the human body, so that in the future you could exchange information with other people and devices at the touch of your finger (or other parts of your body).
The last problem is with the portability and size of our mobile devices. One answer is to build them into items we would normally use – clothes, for example. If you want to see something a bit more extreme, check out the work of Steve Mann.
Looking at the way digital natives treat their mobile phones as an extension of self then it seems like the future human will be some form of human-mobile hybrid – where do I insert the SIM?
The “solutions” from the IT industry over the last decade haven’t appealed to me a great deal. I have found clustering and storage area networks (SAN) too complex – complexity can interfere with fast recovery and support. Blade servers offered little that was new – just packed server functions into a smaller and smaller space.
Virtual servers however offer something new and actually useful to computer users – no wonder IT people and the IT industry are raving about them.
In a sense, operating systems like Linux and Windows provide virtual machines for applications. Applications like Word or Exchange talk to the operating system rather than the hardware directly. The trouble is that applications these days bind quite tightly into the operating system, and the operating system and application become a kind of super package. Virtual machines provide a way of dealing with this operating-system-and-application super package and let us treat the whole lot as if it were itself an application – see Microsoft’s VHD catalogue for example.
I have been experimenting with Microsoft Virtual Server 2005 R2 SP1 – this lets you get started on a familiar platform, is free and is really easy to use. I am planning a move to virtual servers for most of our application servers (e.g. finance, personnel, on-line testing), development servers, moderate-use web servers and helpdesk systems. In fact I don’t see why all servers, apart from the really busy (email back ends) and the really big (multimedia and file), shouldn’t be moved to virtual. I will be moving to virtual as fast as I am able and anticipate replacing about 30 servers with about 5 or 6 virtual server hosts.
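The 30-servers-to-5-or-6-hosts figure falls straight out of the guests-per-host ratio. A quick sketch of the arithmetic (the guest densities are the planning range discussed here, not vendor figures):

```python
import math

# Rough consolidation arithmetic: about 30 physical servers, packed at
# somewhere between 4 and 8 virtual guests per host.
physical_servers = 30

hosts_needed = {guests: math.ceil(physical_servers / guests)
                for guests in (4, 6, 8)}
print(hosts_needed)  # {4: 8, 6: 5, 8: 4}
```

So a middling density of around 6 guests per host gives the 5-host estimate, with 4 per host as the pessimistic case.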
System Center Virtual Machine Manager 2007 (SCVMM) promises the amazing option of converting a real server to virtual by “pointing” a management screen at the real server – this will do wonders for moving all those difficult application servers where the only other way to move them is to re-install the application on the target machine. I tried the beta version of this in July but couldn’t get it working, so I look forward to another go with the “gold code”.
Also check out the amazing new way of evaluating new systems at Microsoft’s VHD catalogue. Instead of downloading an application and then installing it, you can download a packaged environment as a virtual machine – an operating system and application all configured for you – this is how I trialled SCVMM. The only downside is that the virtual machine is a multi-GB download, but with the Microsoft download agent this took only 30 minutes on our college internet connection.
From the point of view of an IT manager, this is why I like virtual servers:
They save space
I am planning to run from 4 to 8 virtual servers per host server – that saves the space of 7 servers
They save power
Each server uses two loads of electricity – one load to run it and another for the air conditioning units that cool it. Instead of powering 8 servers I can now power one host server – saving the college money and saving the environment at the same time. The space saving also makes the cooling more efficient – another saving.
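The “two loads of electricity” point can be put into rough numbers. A back-of-envelope sketch, assuming each box draws about 400W and the cooling roughly matches it (both figures are illustrative assumptions, not measurements from our server room):

```python
# Back-of-envelope power saving: each server draws power once to run
# and roughly the same again in air conditioning to remove the heat.
# The 400W per-server figure is an assumption for illustration only.
WATTS_PER_SERVER = 400
COOLING_FACTOR = 2  # run + cool

def total_watts(n_servers: int) -> int:
    """Combined run-plus-cooling draw for n similar servers."""
    return n_servers * WATTS_PER_SERVER * COOLING_FACTOR

before = total_watts(8)  # eight physical servers
after = total_watts(1)   # one virtual host (assumed similar draw)
print(before, after, before - after)  # 6400 800 5600
```

Even if the single host draws rather more than one old server, the saving is still measured in kilowatts, running 24 hours a day.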
They offer new options for business continuity
Even with the relatively simple Microsoft Virtual Server, a virtual server “image” (the .vhd and .vmc files) can be stored on a standby host server and turned on in minutes if need be, in a very simple operation. This is great for the multitude of application servers which don’t change and hold little data.
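Because the “image” is just a pair of ordinary files, staging it to a standby host needs nothing more than a file copy. A minimal sketch (the paths and function name are hypothetical examples, not part of any Virtual Server tooling):

```python
import shutil
from pathlib import Path

# Sketch of staging a virtual server "image" to a standby location:
# the .vhd (virtual disk) and .vmc (configuration) are plain files,
# so a straightforward copy is all that is needed.

def stage_to_standby(image_dir: Path, standby_dir: Path,
                     name: str) -> list[Path]:
    """Copy name.vhd and name.vmc into the standby directory,
    preserving timestamps, and return the copied paths."""
    standby_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for ext in (".vhd", ".vmc"):
        src = image_dir / (name + ext)
        copied.append(Path(shutil.copy2(src, standby_dir)))
    return copied
```

On the standby host the copied pair can then simply be registered and started when needed – which is what makes this such an attractive continuity option for small, static application servers.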
They offer new options for systems development
With a virtual server, instead of installing a new feature, messing up the server and then having to spend hours re-installing the whole thing, you can just copy back the “image” and start again in about 15 minutes – or discard state changes and carry on as if nothing happened.
They offer new options for business agility
New servers can be set up and tested in minutes rather than hours