The Convergence of Mobile Phones and Desktop Computers

I’ve been telling people for over 15 years that someday our smartphones will transform into desktop computers. It’s finally starting to happen with features like Microsoft’s Continuum.

My initial thought was that you’d simply place your phone next to an external keyboard, mouse, and monitor, and they would all automatically and wirelessly connect to the phone.

That will happen, but now there’s a far more interesting possibility: phones, which are always with you, can innovate beyond what a desktop can do.

Technologies like Google Glass, Microsoft’s HoloLens, or Magic Leap will be the display, providing a large, augmented view.

For enhanced input, a combination of voice and gesture recognition will let people do more with their phones while on the go. Minority Report is 14 years old, and Intel and Google, along with manufacturers of VR headsets, are making rapid progress on recognizing hand gestures.

Google’s Project Soli demo video shows the emerging possibilities.

Over a billion smartphones are sold every year, generating hundreds of billions of dollars in revenue. That revenue is funding a smartphone arms race to build fresh and innovative mobile technologies (e.g., Apple Pencil, 3D Touch, Touch ID).

Individually, each of these technologies might seem minor, but in aggregate they’ll make the smartphone of today appear antiquated within a few years. After all, companies need to give you a reason to upgrade every few years.

All of these enhancements will give you fewer reasons to sit down at your desktop or reach for your laptop. Personal computers will soon be the pickup trucks of computing.

Should I Use Objective-C or Swift for Writing iOS Apps?

Simply compare these equivalent snippets, first in Objective-C and then in Swift:

@import UIKit; // Other imports below
#import "ViewController1.h"
#import "ViewController2.h"
#import "MyDataModel.h"
#import "NoLongerUsed.h"

NSString *s = @"Swift is the future";
UIViewController *vc = [[UIViewController alloc] init];
UILabel *label1 = [[UILabel alloc] init];
UIButton *button1 = [[UIButton alloc] init];
NSArray *names = @[@"John", @"Paul", @"George", @"Ringo"];
NSDictionary *ages = @{@"John": @(1940), @"Paul": @(1942), @"George": @(1943), @"Ringo": @(1940)};


import UIKit // No other imports needed

let s = "Swift is the future"
let vc = UIViewController()
let label1 = UILabel()
let button1 = UIButton()
let names = ["John", "Paul", "George", "Ringo"]
let ages = ["John": 1940, "Paul": 1942, "George": 1943, "Ringo": 1940]

Swift is less visually noisy. Now imagine 100,000 lines of code in each language. Which is more maintainable?

Less code and more maintainable code translate directly into cost savings. The only valid argument developers have against Swift is that it’s still an evolving language, so source code will break, at least in the next version. I claim you’re still better off writing in Swift and fixing any breaking changes than writing in Objective-C: you’ll have done less work, and your code will be safer and more maintainable.
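As a small illustration of the safety claim, here’s a sketch (reusing the ages dictionary from the snippet above) of how Swift’s optionals force you to handle a missing value at compile time, where Objective-C would silently hand you nil:

```swift
let ages = ["John": 1940, "Paul": 1942, "George": 1943, "Ringo": 1940]

// A dictionary lookup returns Int?, an optional, so the compiler
// won't let you use the value until you've handled the nil case.
if let year = ages["Pete"] {
    print("Born in \(year)")
} else {
    print("No entry found") // You must handle the missing-key case.
}

// Or supply a default with nil-coalescing.
let year = ages["Pete"] ?? 0

// In Objective-C, ages[@"Pete"] would just return nil, and messages
// sent to nil are silently ignored, so the bug surfaces much later.
```

The point isn’t that any one lookup matters, but that the compiler checks every one of them across those hypothetical 100,000 lines.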

By the way, if you want to learn Swift, there’s a lot of information available: books, blogs, and other reference material.