New PC Laptop - Nearly Four Years Later

I purchased a Lenovo Y580 laptop three years and ten months ago. I blogged about that experience and talked about its specs, particularly compared to the MacBook Pro of the time. I was able to put together a relatively powerful laptop for less than $1400 (Canadian). Of course, that was back when the Canadian dollar was pretty much at par with the US dollar.

I started to feel the "upgrade bug" this year, and had a pretty good feeling some technological features would come together to make it a good time to purchase my next computer. The Lenovo Y580 was my first gaming laptop, and it really was a (mostly) good experience, so I'm sticking with a laptop again this time around.

After looking at several models, I found the Asus GL702VM. This laptop ships in two versions: one with a 256GB SSD for $1599 US, and another with just the 1TB hard drive for $1399 US. You can pick up a 256GB SSD for much less than $200, so the cheaper model is the wiser choice. With an SSD added, the GL702VM costs slightly more than the Y580 did four years ago (as I had it configured). Unfortunately, the Canadian dollar is now only worth about 75 cents US, so in Canadian dollars the difference is quite large.

So, how big of a difference is there between these two mid-range laptops launched nearly four years apart? Well, there are a few noticeable technological improvements, and there are some surprisingly similar details that really highlight how technological advancements have started to slow to a crawl.

Processor

The processor in the Y580 is an Intel Core i7 3630QM, a third-generation 2.4GHz quad-core, eight-thread part. The GL702VM has a sixth-generation Core i7 6700HQ, a 2.6GHz quad-core, eight-thread processor. PC Perspective ran comprehensive benchmarks comparing Intel Core processors from the second generation through the sixth. The improvements from the 3rd- to 6th-gen Core i7 range from 5% to 50%, with most falling in the 15% to 20% range. That assumes the processors are running at the same speed; the i7 6700HQ also has a 200MHz clock advantage.
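Folding that 200MHz clock advantage into the quoted generational gains gives a rough ceiling for what the spec sheets promise. This is back-of-the-envelope arithmetic, not a benchmark:

```python
# Rough combined uplift: generational (per-clock) gain x clock-speed gain.
# The 15-20% per-clock figures are from the PC Perspective results above.
clock_gain = 2.6 / 2.4 - 1              # ~8.3% from the 200MHz advantage
for ipc_gain in (0.15, 0.20):
    combined = (1 + ipc_gain) * (1 + clock_gain) - 1
    print(f"{ipc_gain:.0%} per-clock gain -> ~{combined:.0%} overall")
# 15% -> ~25% overall, 20% -> ~30% overall
```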

However, these improvements need to be put into context. The improvements PC Perspective saw in games were exaggerated by a dual-video-card setup that is not only known to be CPU intensive, but also more expensive than most gamers can afford. Some of the benchmarks are referred to as "synthetic" because they measure CPU performance directly and aren't programs a user would actually run.

So, the real question is: does the new laptop "feel faster" in the day-to-day applications that I run? In short, no. Rather than timing tasks to a fraction of a second, I ran various tests simultaneously on the two laptops and simply observed how long each took. To make it a "fair fight", I reformatted the Y580 with a fresh install of Windows 10. Startup, shutdown, and reboot times were virtually identical. Most programs launched slightly faster on the new Asus laptop, but generally by less than two seconds. Application loading aside, the new laptop doesn't actually feel any faster for general use. There were some instances where the new laptop performed quite a bit better, but I am certain those had little to do with the processor. I'll touch on them later.

Memory

From the start, I had configured the Y580 with 16GB of RAM running in dual channel (two 8GB memory modules). Although 8GB of RAM is sufficient for most users, I like to run virtual machines on my computer, so more RAM is definitely better. What about the GL702VM? It has a single 16GB RAM module. There has been a transition to a newer type of memory (DDR3 to DDR4), but few applications benefit from slightly faster memory in any significant way. In fact, total memory bandwidth will probably be slightly lower on the new laptop, since the DDR4 memory runs in single channel compared to the dual-channel configuration of the old DDR3 memory.
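To put rough numbers on that claim, here is a quick back-of-the-envelope calculation. The module speeds are my assumptions (DDR3-1600 and DDR4-2400 are typical for these machines, but I haven't confirmed the exact parts); the shape of the result holds either way:

```python
# Rough peak memory bandwidth: transfers/sec x 8 bytes per 64-bit channel
# x number of channels. The module speeds below are assumptions.
def peak_gb_per_s(mega_transfers, channels):
    return mega_transfers * 8 * channels / 1000.0

y580    = peak_gb_per_s(1600, channels=2)  # assumed dual-channel DDR3-1600
gl702vm = peak_gb_per_s(2400, channels=1)  # assumed single-channel DDR4-2400

print(f"Y580 (DDR3, dual channel):      {y580:.1f} GB/s")    # 25.6 GB/s
print(f"GL702VM (DDR4, single channel): {gl702vm:.1f} GB/s") # 19.2 GB/s
```

If the GL702VM has a free memory slot, adding a second module later would restore dual channel (and double the RAM).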

So, after nearly four years, I have the exact same amount of RAM in my new laptop.

Storage

Solid state drives (SSDs) have rapidly gained popularity over the last several years. Even in 2012, I knew that any system I bought (laptop or otherwise) would have to have an SSD. For years, traditional hard drives were the primary storage device in computers, and while processors, RAM, graphics chips, and virtually every other component got significantly faster every year, hard drives only got incrementally faster. A hard drive has a way of making even the most powerful computer slow to a crawl, especially while starting up or launching an application.

So, when I ordered the Y580, I added a 256GB SSD. That SSD is still working great today and uses a SATA interface (mSATA, SATA III to be precise). When ordering the GL702VM, I managed to get a really good pre-order price on the model with the 256GB SSD. Despite Asus' own product page bragging about the new, faster NVMe SSD interface, the pre-installed SSD is still SATA-based. Although it uses the same interface as the older drive, the new SSD is definitely faster. When loading large programs, such as games, the new laptop finishes noticeably sooner; the graphics benchmark Unigine Valley, for example, finished loading roughly 8 seconds faster on the GL702VM.

While SSDs are fast, they are quite a bit more expensive than traditional hard drives. If you want to store lots of files (pictures, videos, etc.), you are probably still going to need a hard drive. The Y580 came with a 1TB 5400rpm hard drive. The GL702VM ships with a 1TB 7200rpm hard drive, so it is technically faster, but it makes little difference in how fast the laptop feels.

One other noticeable difference between the two laptops is something the new laptop lacks: optical storage. The Y580 included a Blu-ray reader. The lack of an optical drive in a new laptop is hardly a surprise today; I tend to be more surprised by how many laptops still include one. There is an advantage to dropping the optical drive as well: it helps keep the weight and size of the laptop down. This new, more powerful laptop is actually slightly lighter than the old one (2.7 kg vs 2.8 kg), and quite a bit thinner (24mm vs 36mm).

Again, as with the RAM, I am getting a laptop with the exact same amount of storage, and drives that are only slightly faster than the ones I have been using for nearly four years.

GPU (Graphics Chip)

Right off the bat I will say that this is one area where the new laptop has improved in a huge way. The new GTX 1060 graphics processing unit (GPU) has over three times as many graphics "cores" as the GTX 660M in my old laptop, and those cores are two generations newer. The cores in the GTX 1060 also run at clock speeds nearly double those of the GTX 660M. There is three times as much graphics memory on the GTX 1060 (6GB vs 2GB), and that memory has three times the bandwidth (192GB/s vs 64GB/s). When it comes to graphics-intensive applications, like games, there is no comparison between the new and old laptops.

The GL702VM is able to play any modern game with relative ease. The Y580 can play those same games, but many graphical details must be scaled back and/or the game must be run at a lower resolution. Finally, this is one area where four years has made a huge difference.

Display

I have to admit that I didn't need a new laptop; not really, anyway. The Y580 was only starting to show its age with the latest video games and their high-end system requirements. I could still have played those games; they just wouldn't have looked as good. There is, however, one reason I did need to replace my laptop: at 45 years old, I have started to notice that a 15.6 inch 1080p display is getting to be just a little too small. This time I went a little bigger; the GL702VM has a 17.3 inch 1080p display.

Remarkably, the new laptop is barely larger than the 15.6" Y580. The bezels around the display are actually quite small. The GL702VM is only about 30mm wider, and 25mm deeper than the Y580.

Beyond the size advantage, the new display can refresh at up to 75Hz, where many laptop displays (including the Y580's) are fixed at 60Hz. I say "up to 75Hz" because the new laptop includes a technology called G-Sync, which is important for gaming. There are much better descriptions of G-Sync out there, but in short, it lets the laptop refresh the display when a new full frame is ready, rather than at a fixed rate. With a fixed refresh rate, games suffer from one of two issues (the user chooses which). You can either tell the graphics chip to draw frames whenever they're ready, or tell it to synchronize fully drawn frames with the refresh rate. The former leads to an issue known as tearing: the top portion of the screen shows part of one frame while the bottom shows part of the following frame. The latter causes stutter: the graphics chip may not have finished drawing a full frame when the display is ready, so the frame waits for the next refresh cycle.
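To make the stutter case concrete, here is a toy frame-pacing simulation. This is my own illustration of the general idea, not anything from Nvidia, and the frame render times are made up:

```python
# A toy frame-pacing simulation: when does each rendered frame appear on
# screen? Simplified: the GPU is assumed to keep rendering while a finished
# frame waits for the display.
import math

REFRESH_HZ = 60
INTERVAL_MS = 1000.0 / REFRESH_HZ        # ~16.7ms between fixed refreshes

render_ms = [14.0, 18.0, 15.0, 21.0, 16.0, 17.0]  # made-up GPU frame times

def fixed_vsync(render_ms):
    """Fixed rate, synchronized: each frame waits for the next refresh."""
    t, shown = 0.0, []
    for r in render_ms:
        t += r
        shown.append(math.ceil(t / INTERVAL_MS) * INTERVAL_MS)
    return shown

def variable_refresh(render_ms):
    """The G-Sync idea: the display refreshes the moment a frame is ready.
    (A real panel still has a maximum rate, 75Hz here, but none of these
    frame times exceed it.)"""
    t, shown = 0.0, []
    for r in render_ms:
        t += r
        shown.append(t)
    return shown

for label, shown in (("fixed 60Hz vsync", fixed_vsync(render_ms)),
                     ("variable refresh", variable_refresh(render_ms))):
    gaps = [round(b - a, 1) for a, b in zip(shown, shown[1:])]
    print(f"{label}: gaps between frames (ms) = {gaps}")
```

The fixed-vsync run shows a 33.3ms gap where the fourth frame missed a refresh by just over a millisecond and had to wait a full extra cycle; the variable refresh run simply matches the render times, with no tearing and no waiting.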

The new display uses an IPS panel compared to the old laptop's TN panel. IPS panels have much better colours and brightness, and wider viewing angles.

Other New/Cool Stuff

Although there are some aspects of the new laptop that fail to impress when compared to my old laptop, there are definitely some cool technologies that I am excited about.

The first is the Thunderbolt 3 port. This port uses a USB Type-C connector (which is becoming more common every month), but is quite a bit faster than a regular USB 3.1 port (40Gbps vs 10Gbps). One of the most exciting Thunderbolt 3 accessories is the external GPU dock, which makes it possible to add a full desktop graphics card to the laptop down the road. Hopefully the new laptop is as good to me as the Y580 was, and with that upgrade path it could last me even longer than four years.

Another technology that is "under the hood" is the ability to use an NVMe SSD. NVMe SSDs are quite a bit faster than (the already fast) SATA-based SSDs. Again, this is a technology that should help keep this laptop feeling quick for several years.

One final reason to get excited about the GL702VM is that it is VR capable and meets the requirements of the HTC Vive and Oculus Rift VR headsets. Of course, these headsets are quite expensive right now, but at least I know the laptop will be able to handle VR.

Mainstreaming VR

Two of the major players in the world of VR tech are HTC, with the Vive, and Oculus, with the Rift. The two headsets take slightly different approaches to creating immersive VR, but they handle the 3D aspect in very similar ways and have nearly identical system requirements. Those requirements are quite steep, especially when it comes to the chip used to process graphics in the PC.

In the last month, the two major graphics chip makers have released new graphics cards for PCs that help make VR slightly more affordable. At the end of June, AMD released the $200 (US) Radeon RX 480 graphics card, and in mid-July Nvidia released the $249 (US) Geforce GTX 1060.

What makes these cards so important to VR is the price. Prior to their launch, the cheapest video card that met the requirements of the Vive and Rift would typically have cost well over $300. Reducing the "cost of entry" to any technology by $50 to $200 is a great step. In March, one site priced out a system that met the minimum system requirements for VR, and it totaled $939. The video card used in that build was $309, and there are now Radeon RX 480 cards priced at $199. That drops the total PC build price by nearly 12%!
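For anyone who wants to check that math:

```python
# Verifying the "nearly 12%" claim using the figures quoted above.
build_total = 939   # March build meeting the VR minimum specs (USD)
old_card = 309      # video card used in that build
new_card = 199      # Radeon RX 480 price cited above

savings = old_card - new_card                       # $110
print(f"New build total: ${build_total - savings}") # $829
print(f"Price drop: {savings / build_total:.1%}")   # 11.7%
```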

Recent rumours point to the Nvidia GTX 1060 making its way into laptops, mostly unchanged from the chip in the desktop cards. In the past, Nvidia has launched "M" (mobile) versions of its graphics chips that were significantly different from the desktop parts. Laptops based on Nvidia's past x60 graphics chips could often be found in the $1000 to $1200 range, and prices on laptops using the previous generation chip (the GTX 960M) are falling right now; a good sign that laptops using the new chips will be available soon. If the rumours about the mobile GTX 1060 are true, fully VR-capable laptops may be available for under $1200 in the very near future.

Unfortunately, one of the biggest barriers to VR going mainstream is the cost of the headsets themselves. The Oculus Rift is $600 (US), while the Vive is an even heftier $800. When the headset is part of the total system cost, a $200 drop in the video card price won't be enough to get VR into the mainstream. That's OK for now. Perhaps by the time the headsets see a significant price drop, there will actually be apps, content, and games available that make VR worth using.

Or maybe VR is the next 3D TV: a lot of hype that just fizzles out.

The best thing about the iOS 9.3 release

Anyone even remotely familiar with Apple knows that, alongside the new iPad and iPhone, there is a new version of iOS. The new software includes some great new features and updates, and at least one major feature that schools have been wanting for a long time.

There is a new Night Shift mode that may help users get better sleep. Notes can now be locked (passcode or Touch ID). Stills can now be extracted from Live Photos. There is even multi-user support for schools with shared iPads!

The multi-user support is something that we will want to explore as soon as possible. I know there are many schools that have wanted this feature for a long time. I still haven't had a chance to investigate exactly how it works, or how it impacts the deployment process, but that's where the actual best thing about the iOS release comes in. It's the timing.

Major iOS updates are typically delivered with the release of the new flagship iPhone. The problem is that, since the iPhone 4S, that has happened in the fall, just after all of a school's iPads have been deployed. Administrators are left scrambling trying to figure out the impact of the new software. Worse, Apple has made it easy for the user to go ahead and update devices, even if the administrator doesn't yet know the impact of (problems with) the update.

Now, I know this is just a "point release", and I know that iOS 10 (X?) will probably still be released in the fall. I'm just glad that this update, with such a major feature for schools, isn't landing just after a new school year has begun. Sure, May or June would be best, but September and October are probably the worst possible times for a new iOS release.

Training Challenges in the North

Iqaluit, Nunavut.

In February.

When I was first asked if I would be available to provide SMART Notebook training to teachers in Iqaluit, my main concern was that I did not have the gear to handle Canada's far north in the middle of winter. Sure, I had a parka, some gloves, and boots. That isn't uncommon for Canadians.

But there's a pretty big difference between winter in southern Ontario and northern Canada.

As it turns out, the weather wasn't nearly the challenge I thought it would be (even though my flight out did get cancelled due to a blizzard). I picked up some better boots, better mittens, a balaclava, and some snow pants, and ended up walking around quite a bit while in Iqaluit. It was a great experience, and I only fell through the snow once!

The real challenge of Iqaluit, from an educational technology training perspective, is the state of the Internet.

The Internet speed at the hotel would lead me to click on a web link, walk away to do something else, and come back to the computer a couple of minutes later. The speed at the school wasn't any better. In fact, the school Internet was further hampered by the government filters. I have to wonder how long it will take for officials to realize that the filters are increasingly ineffective, especially as students begin to bring their own data-enabled devices into the classroom. The filters also end up blocking useful teaching tools and valuable information (some of the SMART-related resources appeared to be blocked).

SMART Response worked, but not particularly well, and would not be usable for more than a handful of questions. To SMART's credit, the question web pages are actually quite small. Unfortunately, the school's Internet connection is so slow that the question pages would still take up to a minute to load on student devices, with another delay between the student clicking to submit a response and the response being "received" by the teacher.

Surprisingly, SMART Maestro, the iPad-enabled feature of Notebook, ran smoothly. This suggests that most or all of the network traffic required to mirror the SMART Board to the iPad stays on the local network.

On my third day of training, I asked the teachers what their strategies were for integrating Internet-based materials into the classroom. In unison, several teachers replied, "We don't". This may seem like a shocking response in the 21st century, but it isn't a surprise once you've tried using the Internet in the school for a few days.

So, the solution could be to pre-download resources from home. The teachers did comment that their Internet speed at home was quite a bit better than the Internet at the school. This was a solution used to a limited degree by some teachers, but there was another problem. It seems that the best deal for Internet in Iqaluit only includes roughly 40GB of monthly data, and each additional GB is $15! Ouch! I can barely stay below my 275GB monthly allotment and have considered paying the additional $10/month to get unlimited bandwidth. That's great for me, but there is clearly a problem with "Internet equity" in Canada.

The CRTC is currently soliciting input on broadband connectivity in Canada. The completed questionnaires must be submitted by February 29, 2016, so go participate as soon as possible (but please just read a little further first).

Before you respond to that poll, just take a few moments. Forget about Netflix. Forget about iTunes. Think about your own child not having access to the Internet to research a school subject. Consider that other students across the country have relatively easy access to resources like Homework Help, Khan Academy, and a variety of other online learning resources. Many school districts are moving to Google Apps or Office 365, tools that help enable collaboration and 21st century skills. From what I experienced in Iqaluit, these tools would be virtually unusable.

Apple, FBI, ISIS, and Secrets

This goes significantly off-topic from what I normally talk about, but still revolves around technology (and even touches on the potential impact on education). The news about the FBI demanding that Apple unlock the phone of one of the San Bernardino gunmen is everywhere, and the FBI using the suffering of the victims' families to get what they want is not only immoral, it is irresponsible.

One of the most common arguments that comes up regarding encryption and secrets is that if you aren't doing anything wrong, you have nothing to hide. This could not be further from the truth. Many businesses around the world depend on trade secrets, or keeping secret the development and progress of new products and technology. Law enforcement agencies may be protecting the identities of undercover agents, witnesses, or victims. You know, agencies like, say, the FBI. Can you say you have nothing to hide while still demanding answers about the breaches in security at Target, Neiman Marcus, and Michaels? More in line with education, schools and districts must also be sure they are keeping student information secure and private. This is not just something that should be done, but something that must be done. We all have "something to hide", even if we're not doing anything wrong.

The FBI claims it hopes to discover information on the phone, information that will help prevent other terror attacks. This is highly unlikely, and the FBI knows it. The San Bernardino gunmen were a man and a woman, and Islamic extremist groups (ISIS, the Taliban) do not use women as "soldiers". This act of terror appears to have been "ISIS-inspired", but that is very different from "ISIS-plotted". The FBI can get access to phone records even without access to the phone. They likely already have a good idea who the gunmen were in contact with, and there is little else they could discover from the phone itself.

Asking Apple to try to create a method to circumvent security measures puts far more people at risk than any possible gain from unlocking this one phone.

There is a belief that the burden on, or cost to, Apple to circumvent the security of the phone is relatively small because Apple is such a large and wealthy company. Again, this could not be further from the truth. If Apple succeeds in gaining access to the phone, it calls into question, at least in the eyes of the public, the actual security of Apple's products. Apple could lose contracts for large-scale deployments to government agencies, businesses, and yes, even school districts. The public perception of ineffective security could also cost Apple consumer sales. The costs go far beyond the hours Apple's developers would spend gaining access to the phone's contents.

There isn't anything the FBI can do to bring back the victims of the attack, and it is disturbing that they are using the grief of the victims' families to advance some hidden and unrelated agenda.

Waiting on the next big thing

After recording the podcast following FETC this year, our group pondered why we didn't really see any major new technology.

I suggested that it might be related to the difficulties the major processor fabrication companies are having shrinking the chips used in our electronics. I quickly realized that this was a topic that my colleagues really had little knowledge of, and that most users of technology probably don't know much about the chips inside the gadgets we use every day.

This post is not intended to be an in-depth technical discussion. Hopefully I can provide a simple explanation of how our electronics have managed to get faster and do more things over the years, and give a quick overview of what is causing a slowdown in some areas of technology.

In 2006, Intel introduced the Core architecture of processors. These were manufactured on what Intel referred to as a 65nm process (a nanometer is one-billionth of a meter); the same process had also been used in the later Pentium 4 processors. The "65nm" label describes the process as a whole: some features are larger than 65nm, while others can be smaller.

Late in 2007, Intel began producing processors on a 45nm process. While 45nm is roughly 70% of 65nm, chips are two-dimensional, so the shrink applies to both width and height. This means the 45nm process can create an identical chip in roughly 48% of the space used by the 65nm process (45^2 / 65^2 = 47.92...). The scaling isn't quite perfect in practice, so chips don't shrink by quite as much as the process naming implies. Still, chip manufacturers can pack a whole lot more transistors into the same amount of space used by the older process. Reduced size is not the only advantage of a new, smaller process: smaller processes use less power and generate less heat, and the reduced size normally means a chip as complex as "last year's" high-end chip can be produced at a lower cost.

In early 2010, just over two years after introducing the 45nm process, Intel released chips produced on a 32nm process (roughly half the area of 45nm). In mid-2012, Intel started using a 22nm process (roughly 47% of the area of 32nm). The first sign of trouble came at 14nm (about 40% of the area of 22nm): Intel released a very limited number of 14nm chips, targeted mainly at low-power laptops, and higher-powered 14nm desktop and laptop chips did not show up until 2015. Intel's roadmap now shows that products based on their next process (10nm) are not due until late 2017.
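The area math is simple enough to check across the whole sequence of nodes:

```python
# Area ratio between two process nodes: (new / old) squared. This ignores
# the real-world scaling imperfections mentioned above.
nodes = [65, 45, 32, 22, 14, 10]  # Intel's process nodes, in nm

for old, new in zip(nodes, nodes[1:]):
    ratio = (new / old) ** 2
    print(f"{old}nm -> {new}nm: same chip in ~{ratio:.0%} of the area")
# 65 -> 45: ~48%;  45 -> 32: ~51%;  32 -> 22: ~47%
# 22 -> 14: ~40%;  14 -> 10: ~51%
```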

Intel is not the only chip-making company around. Other big players include TSMC and Samsung. Despite the public disputes between Apple and Samsung, the processors in most iPhones have actually been manufactured by Samsung. The latest iPhones have started using chips manufactured by TSMC. Samsung and TSMC have also started to struggle to make chips smaller. Some rumours suggested that with the iPhone 6 (and 6 Plus), Apple was taking so much (of a limited) capacity from TSMC that other tech companies could not get access to the latest process. AMD and Nvidia are the two major graphics chip designers, and have their graphics chips manufactured primarily by TSMC. Neither company released graphics chips using TSMC's 20nm process.

Limiting the latest and greatest manufacturing technologies to a handful of companies means that only those companies have the potential to make noticeable improvements, but they may not be under pressure to do so. Apple seems to have capitalized on its nearly exclusive access to TSMC's advanced process: benchmarks for the iPhone 6, and again for the 6S models, showed significant improvements in performance. Note that the iPhone is under competitive pressure from Android smartphones. Intel, on the other hand, faces little competition in its primary market of computer processors, and not only designs its processors but also owns the facilities that manufacture them. The performance improvements in Intel's processors have been relatively small (5-10% from generation to generation).

What about technology other than smartphones and computer processors?

We are starting to hear more about VR (virtual reality) and AR (augmented reality). Oculus, probably the most recognizable name in VR, announced the system requirements for the Rift VR headset. The cost of building a system to meet those requirements is quite high. Here is a quote from that page, highlighting the importance of the GPU (Graphics Processing Unit).
Today, that system’s specification is largely driven by the requirements of VR graphics. To start with, VR lets you see graphics like never before. Good stereo VR with positional tracking directly drives your perceptual system in a way that a flat monitor can’t. As a consequence, rendering techniques and quality matter more than ever before, as things that are imperceivable on a traditional monitor suddenly make all the difference when experienced in VR. Therefore, VR increases the value of GPU performance.
Remember that AMD and Nvidia are the major sources of graphics chips, and that they likely did not get access to 20nm? Relatively few computers meet the graphics requirements of the Rift.

Other areas of technology may also have been stalled by limited access to the newest chip manufacturing processes. Nvidia makes the chips in the tablet for Google's Project Tango, a computer-vision platform for detecting objects (think self-driving cars). This technology is relevant for robotics, a topic I discussed in the podcast.

While the slowdown in technological advancement continues, more companies are finally getting access to the latest manufacturing processes. AMD and Nvidia are planning products based on 14nm and 16nm processes for release in 2016. AMD has stated that its upcoming graphics chips will make the largest leap in performance per watt in the history of the Radeon brand (AMD's primary graphics brand, introduced in 2000).

Hopefully this means we will see some new and really interesting tech at conferences next year.

Reflections on FETC 2016

This was my fourth trip to Orlando to attend FETC, and there were some notable differences from previous years. Our group was significantly larger than before, and included faculty, staff, masters students, a PhD student, and representatives from companies that work closely with us. We wrapped up FETC with a brief podcast. Here I will expand on my comments in that recording, and talk about some other things I noticed at FETC 2016.

As for the conference itself, the layout and size were noticeably different. The exhibit hall stretched from north to south, with the keynote area at the "back" of the convention center. The exhibit area was definitely smaller than in previous years, but still large enough to keep attendees busy exploring booths.

As noted by my colleagues in the podcast, there wasn't much that was particularly revolutionary or innovative to be found at FETC. This seems to be a reflection of the market in general. We all seem to be waiting for the next "big thing".

While not exactly new, this seemed to be the year of the robot and the maker space. I was particularly intrigued by Ozobot, which I believe is a great way to introduce young children to basic coding skills. The Ozobot follows a path drawn with magic markers, and can be given simple instructions by alternating the colours drawn along the path. While a great implementation, I see two challenges. First, what is the next step after the Ozobot? Once a child has mastered the instructions and "played", the Ozobot itself cannot go beyond its very basic programming. Second, the price tag of $50 USD is quite steep for such a simple robot that likely won't see much classroom time. A class set of 18 is $1000, which is not really a deal at all: some extras are thrown in, but you give up the value of two Ozobots to get them. If the Ozobot were $20 USD, with a 25-unit bundle (with extras) at $500, I would be more excited.

Sessions and conversations around maker spaces almost always include, or even focus on, the topic of 3D printing. There were a few booths showcasing 3D printers, but it is interesting that none were from the "big players" (Epson, Canon, HP, etc). It does lead to concern about acquiring a device from a company that might not be around next year.

One "throw back" at FETC was typing instruction. There were several booths focusing on teaching typing skills. I have been told that this is a response to poor results in online tests where students that know the content are still doing poorly because they cannot type quickly enough to finish on time. I imagine these skills are also valuable for collaborative work on Google Docs or Office 365.

I have still been considering the question of what I hope or expect to see in the future of educational technology. Other recent events, including CES, showcased quite a bit in the VR/AR (virtual reality/augmented reality) space, but I only saw a little of it at FETC. I know the system requirements for the Oculus Rift are fairly demanding, and it is also very expensive. If that were the only option, I would understand why it didn't make an appearance at FETC, but Google Cardboard seems a reasonable choice for VR in the classroom. Hopefully we see more immersive and interactive uses of Cardboard soon.

Remote Student Participation

On Wednesday we learned that one of our students would need to participate in classes remotely. Starting Monday.

Of course the first suggestion volunteered to me was, "Can't we just Skype the student in?" Our classes are not standard university undergraduate lectures. Our instructors are typically modelling the K-12 classroom. They move around quite a bit, and the students participate in small group activities. Skype running on a stationary device was not going to work.

I had a pretty good idea that what I really wanted was a VGo, but there was no way we were getting the funds for that. Even if, by some miracle, we managed to convince "the powers" to buy a VGo, it was virtually impossible that the convincing, purchasing, delivery, and setup would happen before Monday morning.

A couple of years ago I discovered Swivl at an Ed Tech conference (I honestly can't remember which one). I encouraged our Instructional Resource Centre to purchase a couple of them for our students to use for their micro-teaching videos. The students record themselves delivering a lesson activity, then review the recording to evaluate and adjust their teaching methods. Previously, students would often set up cameras on tripods, or ask another student to do the recording. Neither method was ideal: a tripod did not allow the student to move around, and audio was troublesome in both scenarios.

With Swivl, the "teacher" wears a wireless tracker (with integrated microphone), and the Swivl base turns and pivots to follow the tracker. The recording device (typically a smartphone or tablet) sits on the base. A single, short audio cable connects the base to the device to record the audio from the mic integrated into the tracker. It really is impressive in its simplicity, and works quite well.

The problem is that Swivl is primarily designed for recording lesson activities, not video conferencing. The Swivl base connects to the recording device using a male-to-male, 4-segment (TRRS) 3.5mm cable; this is the standard plug found in pretty much every smartphone and tablet, and it carries both the mic-in and audio-out signals. Unfortunately, this cable runs directly from the Swivl base to the device, with no splitter or jack on the base for the audio out.

Our initial tests using Lifesize Video (the standard video conferencing solution at our university) and an iPad confirmed that audio was being recorded from the mic in the tracker, but no audio would play back unless the cable from the base was unplugged from the iPad.

We decided to try a 3.5mm 4-segment to 2 x 3.5mm 3-segment splitter.

Adapter to break out the mic-in and audio-out connections
We actually had to use two of these adapters: one converted the 4-segment mic out from the Swivl base to a standard 3-segment mic line, and the other connected to the iPad, letting us plug in both the mic line from the Swivl base and a set of external speakers.


Our Swivl telepresence setup on its cart

With everything plugged in, we started a Lifesize Video session and everything worked! The final bit was putting everything on a cart that could be easily moved between classes, taping together some of the cabling (to try to prevent instructors/students from unplugging cables from splitters), zip-tying some of the cables to tidy it up, and labeling plugs that couldn't easily be taped in place ("to iPad").

It would be nice to have the cart completely wireless, but we settled on a single power cord. The Swivl has an estimated 4-hour battery life, and the student has back-to-back classes that total 5 hours. We also didn't have battery-powered speakers.

It would also be better if the remote student could control the direction of the Swivl rather than relying on the tracker, especially during the small group sessions. That is a feature of Swivl Cloud Live, which is in beta; I submitted the sign-up form, so I see more experimenting in the next couple of weeks.

Friday morning we conducted a test session with the student and all went well. The first class is Monday morning. Fingers crossed.