My Twitter Account is under Subpoena by the US Government

…Along with the 637,000 other accounts that subscribe to WikiLeaks. WikiLeaks is who I follow, but if you follow Julian Assange, Bradley Manning, Rop Gonggrijp, or Birgitta Jónsdóttir, among a few others, you are part of the same subpoena. Among the information being sought are our IP addresses, mailing addresses, phone numbers, banking information, credit card information, etc.

This all really began with the subpoena of Jónsdóttir’s Twitter account. Jónsdóttir is an Icelandic parliamentarian who was involved with the WikiLeaks release of the video of a US Apache helicopter attack in Iraq, in which two Reuters reporters were killed and children were wounded.

To quote Birgitta Jónsdóttir herself: "I think I am being given a message, almost like someone breathing in a phone." I love that.

Oh what can I even say about this? Apparently it’s too late to object if I haven’t already. I’m not even sure if it would matter anyway as I am not a US citizen. It makes me want to laugh hysterically and go on a rant suitable for a raving lunatic all at the same time.

The only civil reaction I can think of is to point out the obvious: These are the tendencies of a paranoid and delusional person. Except that we’re speaking of a governing body.

I really thought that all the wiretapping and cellphone eavesdropping would be a thing of the past once Bush was gone, but instead the scale has only increased to an international level.

If this was a schoolyard, I’d say that a multi-student brawl was about to be called at the sounding of the end-of-class bell.

I don’t think I need to explain the analogy.

Posted in Current Events | Tagged , , , , | Leave a comment

New Year Prediction: The Rebel Forces Strike Back!

Last year was an interesting, if not alarming, year for free speech and net neutrality. The big subject to watch, of course, is the ongoing controversy surrounding Julian Assange and WikiLeaks. The mistake here is to assume that this affects no one but Mr. Assange and the folks at WikiLeaks, beyond providing an entertaining good-guys-vs-bad-guys story. This could potentially affect us all.

Keep in mind that it is the US calling for his head, and neither Julian himself nor the website’s servers are in the US. Considering what happened to the Pirate Bay guys, it’s anyone’s guess whether he will be strong-armed into a US courtroom. (I know they were prosecuted in Sweden, but the case was backed completely by the RIAA and the MPAA, and it was effectively US law they were prosecuted under.) The Pirate Bay was at least arguably breaking US law, although they were in a country where they were breaking no laws, yet they were still convicted and handed prison sentences and ridiculous fines.

So how far can the US go? Will it be able to sentence Julian Assange in the same way even though he has not broken any US law? Common sense would say no, but these are strange times. The alarmist side of me keeps saying, "we are watching history repeat itself: the Roman Empire reborn!"

There are some cases where the US decides it doesn’t even need a trial. It just does as it pleases! Consider the 80 or so domains seized in November: no proof that those sites were breaking any laws, no trial, just the outright theft of the domain names.

Enter the Rebel Alliance and the Dot-P2P DNS system! Fittingly, the idea sprang from a tweet by TPB founder Peter Sunde. For those not in the know, the idea is to create a decentralized DNS system using existing P2P technology, and thus avoid relying on the US government-usurped ICANN servers. If it catches on, seizing domains will become an impossibility for the US or any other government.
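The mechanism being proposed can be sketched in miniature: in a DHT-style naming system, each peer owns a slice of a hash ring and stores the name records that land on its slice, so there is no central registry to seize. Here’s a toy Python sketch; the node names, domain, and address are invented for illustration, and this is in no way the actual Dot-P2P protocol:

```python
import hashlib
from bisect import bisect_right

def key_hash(s: str) -> int:
    # Map any string to a point on a fixed integer ring.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class ToyP2PDNS:
    """Toy decentralized registry: each node owns the arc of the hash
    ring ending at its position, so no single party holds every record."""

    def __init__(self, node_ids):
        self.ring = sorted(key_hash(n) for n in node_ids)
        self.records = {point: {} for point in self.ring}

    def _owner(self, name):
        # The first node clockwise from the name's hash owns the record.
        i = bisect_right(self.ring, key_hash(name)) % len(self.ring)
        return self.records[self.ring[i]]

    def register(self, name, addr):
        self._owner(name)[name] = addr

    def resolve(self, name):
        return self._owner(name).get(name)

net = ToyP2PDNS(["node-a", "node-b", "node-c"])
net.register("example.p2p", "203.0.113.7")
print(net.resolve("example.p2p"))  # → 203.0.113.7
```

Taking down one node only removes its slice of the records; in a real DHT those records would also be replicated across neighboring nodes, which is exactly why there is nothing for a government to seize.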

What’s interesting here is that the US is beginning to learn that its power, at least on the Internet’s battlefield, is eroding. WikiLeaks is showing the world (and US citizens) the corruption of the US government, and soon the .p2p DNS will ensure that the information remains free in the wild. The US is being backed into a corner with only two options. Option number one is not one the US is famous for: surrender and change its ways. The second, more characteristic option is to choose an enemy and lash out. It will come as no surprise that the US will pick option number two, but who will be the chosen enemy? Likely the first step will be to outlaw most of the information found on WikiLeaks and start prosecuting those hosting mirrors. Once .p2p DNS takes off (and I think it will), it will also be outlawed, and individual users of the system will be witch-hunted à la the RIAA lawsuits.

Another possibility here is a joint effort between Microsoft and the US government to keep the .p2p DNS system from running on Windows machines.

Does this seem far-fetched to you? Maybe, but so does the US deciding that they can seize property without a trial, and prosecute those in other countries using US law. At this point I think it is a mistake to believe that the US will even abide by its own laws to accomplish a goal.

The Internet will remain in the hands of its users this year, but keeping it there means threatening a lion with declawing. Make no mistake, it will attack. At first it will go for the most convenient target, and right now that is Julian Assange and WikiLeaks. The US will do everything in its power to prosecute Assange. Once that media circus is finished, however, it will move on to fresh prey.

I know that this post is alarmist to the point of being ridiculous, but with the lawless actions of the US government this year, I’m not convinced that an alarmist point of view is the incorrect point of view.


Google Chrome OS Tentatively Released

Google’s Chrome OS has been tentatively released in a "pilot program" for testers who will receive a Cr-48 Chrome Notebook. To receive the notebook, one must apply to beta–err pilot test the forthcoming operating system.

Google has apparently already signed deals with Acer and Samsung to release laptops in 2011 with Chrome OS pre-installed, and I believe that the Cr-48 will be commercially available as well. That will more than likely be the Cr-49 or 50, after the testing has been completed, but you get the idea.

So as you probably know, Chrome OS is a cloud-based OS. In other words, it’s really no more than a self-contained Internet browser. It’s designed purely for people who use their computers exclusively for Internet access. Any applications that need to be run will be cloud-based apps accessed through the browser.

Ignoring my previously stated skepticism of cloud computing and Google’s cloud-based apps, my question here is: will there be a market for this OS and, more specifically, for the coming Chrome OS notebooks? And because the OS is a Linux distro, will it have any impact on Linux and the Linux community?

Well, first of all, I don’t see it being widely adopted as a desktop OS. It’s not designed for that. There are many cloud-based OSes out there already, and they are not being used on the desktop; again, that is not what the software is designed for. That leaves laptops and netbooks. But I think this is a mistake. I don’t see netbooks being around much longer, and laptops need a complete operating system or they’re just a waste of space and money. If the OS were redesigned for a tablet, that would be something else. Cloud computing on a tablet I can understand. A tablet doesn’t need any more power than it takes to make some quick adjustments to a document on the go, or to send a quick status update to friends or an employer while heading across town to a meeting (and maybe play a video or two on a flight).

That being said, I don’t think there are enough cloud apps available to justify an OS that operates completely through a web browser. And the apps that are available are too limited for any kind of professional work. As an illustration, let’s take a look at the Chrome Web Store. If you are using the Chrome or Chromium web browser, you can try this out for yourself.

I won’t go into the pros and cons of the store itself (there is a lot I could say to criticize it); I’ll just talk about the apps available. In the productivity menu (we’re assuming professional use here) let’s look for an image editing application. Right now a featured app is the "Advanced Image Editor by Aviary". After selecting it, we are prompted to install it. The "installation" brings us to a web page: http://www.aviary.com/online/image-editor?lang=en#&src=chromeos. I am truly surprised at the features it has. There are several filters and tools available, and you can even work in layers. That is very good for a web application. But as a professional solution? It will never replace GIMP’s functionality and is light years behind Photoshop. I’m really not even sure if it’s decent for a hobbyist. Why use this when GIMP is free? For those who want only to make quick edits for uploading to Facebook, fine, but in no way is this a professional solution.

The same website (Aviary.com) has another link in the app store for music creation. And again, I’m surprised at the sophistication of a web application. This one had some minor issues (dragging and dropping instruments into tracks was extremely sluggish), but on the whole I have to say: nicely done. There is a wide variety of instruments to choose from, and it supports importing your own sounds. But again, this is not a professional solution. LMMS, Ardour, and others are completely free and have far more features.

I’d like to look at one more app. This time I’ll move away from the productivity category and move to entertainment. Let’s try one of the games.

The first game that caught my eye was RuneScape (it was featured). I’ve never really been into MMOs, so I wouldn’t be able to give an unbiased opinion of it, but I would love to hear from anyone who does play that type of game. The graphics impressed me only in that the game runs in a browser. Compared with other games, they would only have been impressive six or seven years ago.

I did try Quake Live, which was also in the store. However, despite being available there, it does not work with Chrome: the app only supports IE 7+ and Firefox 2.0+. I’m assuming that since it is in the store it will soon support Chrome (and thus Chromium), so I gave it a go.

The graphics quality of Quake Live lies somewhere between Quake 2 and Quake III Arena. It plays very well, and the gameplay is very reminiscent of Quake III Arena or Unreal Tournament.

On top of the free-to-play model, there are two subscription options: the Premium Membership at $1.99/month and the Pro Membership at $3.99/month. The monthly fees are not worth it in my opinion. A game like this is worth at most $10.00 as a purchase (being so dated), so I can’t imagine paying a monthly fee for it. The breakdown of the different subscriptions can be found here.
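To put rough numbers to that (using my own $10 valuation, which is obviously subjective), the subscription overtakes the purchase price surprisingly quickly:

```python
# Back-of-envelope: months of subscription before the fee passes
# the ~$10.00 the game would be worth as a one-time purchase.
premium, pro, fair_price = 1.99, 3.99, 10.00
print(round(fair_price / premium, 1))  # → 5.0 (months on Premium)
print(round(fair_price / pro, 1))      # → 2.5 (months on Pro)
```

In other words, after about five months of Premium (or two and a half of Pro) you have paid more than the whole game is worth.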

To go back to my earlier statement about Chrome OS being better suited to a tablet than a notebook, I’ll grant that an FPS would be harder to play on a touchscreen. But at the moment the game isn’t even compatible with the OS, so I don’t know what to say here.

With the OS still in beta, it’s hard to say whether it will be a success. My feeling is that, as it stands, it doesn’t deserve to be, but it does have the Google name behind it, so who can really say? Perhaps if they go the sane route and at least add a virtual keyboard for those who want to install it on a tablet, I might be a little more optimistic.

There is still the big elephant in the room, however: Google’s habit of collecting data. Now, the OS is open source, so if Google is collecting data from the OS itself, we will soon know about it. But since all usage of this OS centers on your data being kept on a server "somewhere out there," I just cannot get behind it. There is no source code to inspect for the web-based apps, and you can be sure that the user is not the only one with access to the files stored in the cloud. But in this case my personal feelings are irrelevant.

Let’s pretend for a minute that this really takes off and overtakes iOS as a popular OS on the Internet. Will this help legitimize Linux for the masses as a viable choice on their PCs? Again, I can’t help but be a pessimist. Android is currently more prolific than iOS on smartphones, yet most people using Android don’t even know they are using Linux. I suspect it will be the same here. Google isn’t going to go out of its way to tell people where the OS really comes from, or to inform them about free or open-source software. They will push their brand name and tell people they are using Google. That’s it.

And that’s honestly a good thing. Google’s business practice contradicts what free and open source is all about and I really wouldn’t want the two being associated together. Linux will keep chugging along at its own pace. With the improvements and growth Linux has made in the last few years, it doesn’t need a name like Google behind it anyway. Linux is doing perfectly well on its own.


Multi-Monitor Support Forces More Use from Windows Partition

I’ve been struggling with multi-monitor support for a while now. I’m still not 100% satisfied, but I think I’ve got the best solution possible for my circumstances. I should probably share my system specs before I go any further.

CPU: Intel Core 2 Duo E8400 @ 3 GHz
GPU: 2× Nvidia GeForce 9800, 512 MB each
RAM: 4 GB DDR2

This was what I wanted: an SLI configuration, Compiz effects, both monitors spanning one desktop (to drag windows from one monitor to the other), and separate virtual workspaces for each monitor, including different wallpaper.

The end result of much effort and several attempts is:

SLI simply does not work with a multi-monitor configuration. In every configuration I tried, the second monitor would not start with SLI enabled. So if I wanted to use both cards, TwinView was not an option.

Separate X screens without Xinerama work, but the setup was unusable for me because it caused some really strange behavior: Nautilus windows kept launching, piling up in the task manager, and would not stop until the system crashed. Until the crash, the two monitors did work, and Compiz was enabled as well. With separate X screens, though, dragging windows from one screen to another was of course not possible.

I tried enabling Xinerama with the separate X screens, and this worked fine but for two things:

  1. No compiz effects. This was disappointing, but I could live with it.
  2. No KDE apps. Indeed, trying to start up any KDE application would cause the entire desktop environment to crash to the login screen. If I tried to log into the KDE desktop environment rather than Gnome, it would load and crash, again leaving me at the login screen. This I cannot live with as there are a few KDE programs that I just cannot live without.

That left me with TwinView, which does work very well, except for two things:

  1. Separate virtual workspaces for each monitor are not possible. Xorg sees only one desktop, which means that each monitor must use the same wallpaper and switching virtual workspaces switches both monitors together. It cannot be done independently.
  2. My second video card now goes unused. I would point out that SLI never worked very well anyway; or rather, it never worked very well with Compiz enabled. I actually switched to KDE briefly for this reason (I could have the Plasma desktop effects enabled and SLI worked fine). I’m now really regretting going the multi-GPU route when I had this machine built. I wish I had just purchased a single 1024 MB card, but c’est la vie.
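For reference, the TwinView arrangement I settled on corresponds to an xorg.conf device section along these lines (a sketch, not my exact file; the identifier and the MetaModes resolutions are examples and will differ for your hardware):

```
Section "Device"
    Identifier  "Videocard0"
    Driver      "nvidia"
    # Both monitors driven by one card as a single X screen:
    Option      "TwinView" "true"
    Option      "TwinViewOrientation" "RightOf"
    Option      "MetaModes" "1680x1050,1680x1050"
EndSection
```

Because TwinView presents both monitors as one screen to Xorg, it is exactly this setup that causes limitation number 1 above.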

This current setup suits me fine for the most part. For graphically intensive gaming, however, I will have to rely on my Windows partition a little more than I currently do, which is a real shame. I believe the blame can be laid squarely on Nvidia for its lousy SLI support in Linux. In Windows I can have a multi-monitor setup with SLI enabled with no problems whatsoever. I know this has to be possible in Linux as well, but Nvidia just can’t be bothered to work it into the drivers. And they refuse to release the source code so that others might do the job for them.

I’ll admit that Compiz is not innocent here. But even with Compiz disabled, there is discernible stuttering in the display when SLI is enabled.

This brings me to another point. When something quirky happens in Linux, it doesn’t mean that Linux is quirky; it means some piece of running software is producing undesired results. To put this in perspective, let’s look at some software running on Windows with less-than-desired effects. Perhaps the most common example I see is the umpteen-thousand toolbars that people, for whatever reason, install in Internet Explorer. The result is not only a slower browser but, with all the spyware that inevitably comes along, a whole system slowed to a crawl (and why the hell are you using IE anyway?).

I could bring up more examples, obviously, but you get the point. In either of these cases, the OS is not to blame; it is the software being run, or a configuration problem. (Well, it’s both in each case. The Windows example includes a configuration issue, because if a guest account were used instead of the default administrator account, the spyware wouldn’t be permitted to install.)

But I digress. I didn’t really mean to turn this into a Linux/Windows comparison. This was only meant to be a more personal post of my own experience.

If you have any experience with the issue I’ve described, please comment and if you have a solution I may have overlooked, please share it.


Implications of the WikiLeaks Scandals

I’ve been following the story of Julian Assange’s arrest and of course the US government reaction to the cable leaks. I wasn’t going to bother with a post about this, but the more I read, the more it seems to occupy my mind. It’s going past the typical "amusing antics of the US government" and delving into Orwellian territory.

Now, this goes beyond the concept of respectable or responsible journalism. Whether WikiLeaks falls into those categories or is simply out to embarrass governments the way tabloids embarrass celebrities and other public figures is not the issue. What is at issue is the freedom to distribute information. In Western culture, are we not free to discuss and report on anything we feel is important, or might be important to other individuals? It’s becoming apparent that this is not so.

We have high-profile public figures calling for Assange’s assassination. Large media outlets such as the New York Times are being blocked for publishing a few of the leaked cables. The "rape" charges against Assange look more and more like an attack on his character in hopes of discrediting him, rather than any kind of serious allegation. (Here is a detailed account of his alleged crime and arrest. This is a very interesting assessment of the charges by acclaimed author and feminist Naomi Wolf.)

This whole situation is getting out of hand. Is this the death of real journalism? Depending on what is being reported, it seems that journalists soon may find themselves in a situation where their profession necessitates operating outside the law. I’m sure that Julian Assange would say that this situation is already here. It’s exasperating.

I find myself wondering how Carl Bernstein and Bob Woodward (the Pulitzer Prize-winning journalists who uncovered the Nixon-era Watergate scandal) would have fared in today’s political climate.

UPDATE:
This is a video from October 22 discussing the (then upcoming) Iraq War Docs leak that brings some perspective. Daniel Ellsberg, the famous whistle-blower of the Vietnam War in 1971, speaks. Part one of two.


The Year of Linux (Take 19)

The general consensus, according to w3counter.com (which gets its data from web usage only), is that the current market share for the Linux OS is approximately 1.5% (as of October 2010). Yes, there are other stats that give different numbers, but this seems like an approximate average. Some put Linux’s share below 1%, and more than a couple put it above 5%. The specific numbers don’t even matter to me at this point; I only think they should be better.

Now, of course, I’ve also heard the argument that globally the percentage could be as high as 40%, based on three points:

  1. Countries outside of Europe and North America are not considered by most, if not all, data-collecting websites.
  2. The population of China and India together roughly doubles the population of Canada, the United States and all of Europe combined.
  3. The governments of both China and India actively promote the use of GNU/Linux over Microsoft and Apple OSes.
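Point 2 holds up against rough 2010 population estimates. A quick back-of-envelope check (the figures below, in millions, are approximations from memory, not exact census numbers):

```python
# Approximate 2010 populations, in millions (rounded estimates).
china, india = 1340, 1180
canada, usa, europe = 34, 310, 730

asia_two = china + india      # ≈ 2,520 million
west = canada + usa + europe  # ≈ 1,074 million
print(round(asia_two / west, 1))  # → 2.3
```

If anything, "roughly doubles" undersells it: China and India together come out to better than twice the combined population of Canada, the US, and all of Europe.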

I only point to this as a way to discourage the mindless Linux hate coming (mostly) from Microsoft users. I fail to understand where this hate comes from, especially considering that it mostly comes from people who haven’t given Linux a chance at all, or from those who tried it once over a weekend. But I digress.

My real question here doesn’t have anything to do with Linux’s popularity in various Asian countries. My question is: what would it take for Linux’s market share to rise above, say, Mac OS X’s desktop market share?

I think I have an answer, but it will take a bit of explanation.

Linux is the No-Name Brand OS

At least to Windows and Mac users it is. Most people consider Linux to be the cheaper and less-usable alternative to the "name-brand" Windows and Mac. This, of course, couldn’t be farther from the truth but the only way to change people’s minds is to actually get them to use it for more than a day. And I don’t think that this can happen without a cultural change.

Consider the time-frame that Microsoft came to dominate the market.

In the early ’90s, Microsoft was still the young, upstart software company that was starting to do battle with the big, evil corporate Apple. Windows 3.0 was released in May of 1990. (3.11, the big one before Windows 95, was released in 1993).

Anyone remember what else was going on at the time? I mean outside of the computer world. Culturally speaking. Anyone?

There was Seattle, the whole "grunge movement," and everyone in every advertising agency freaking out because no one was listening to them. Fashion magazines had no idea what to do and were dressing models in plaid and knee-high combat boots, selling outfits for $50 and less. No-name-brand products, for the first time, were outselling name-brand products. People were about function, not style, and people loved to support the underdog. Western culture, in those three or four years of the early ’90s, was saturated with these ideologies.

People also started buying computers far more than they ever had before. Technology moved to the front of people’s minds in the mid ’90s. I remember "multimedia" as the buzzword, but that was quickly replaced as the media became aware of this mysterious thing that hackers were now doing: "surfing the web." The Internet began to be seen as a re-emergence of the Wild West; at least, that is how it was portrayed.

Enter Windows 95, the latest offering from that young upstart software company that ran perfectly well on an IBM clone — the no-name brand computer. The timing was really perfect. Microsoft was well on its way to dominance, but already the function-not-style mindset was dissipating. Grunge was dead. And musical tastes were changing too. "Electronica" was now becoming a movement. Raves became popular again. And this edgy sounding music was all technology-based. Kids could make this sort of music in their basements if their parents had bought them a computer at some point.

Windows 98 was released in June of 1998. Advertising agencies were breathing a sigh of relief and starting to relax a bit. The recession was over, and the US budget was balanced. People had disposable income again. The depressing music of the early ’90s was gone; the Spice Girls, the Backstreet Boys, and Britney Spears were at the top of the charts. The Smashing Pumpkins disbanded in disgust. And Microsoft was king. By the time XP was released in 2001, America was thoroughly back in the name-brand, style-before-function mindset. As long as something looked pretty, it didn’t matter if the core was rotten. If it was expensive, it must be good.

Microsoft got really lucky. Windows had matured in a way that fit perfectly into the rest of American culture. Don’t get me wrong, Gates’s business sense had much to do with Microsoft’s success too; I’m just saying that the timing of it all expedited its popularity.

Considering this, where are we left? Well Microsoft is still undeniably king. No longer are they "the young, upstart software company that was starting to do battle with the big, evil corporate Apple". They are now the big evil corporation that people are going to Apple to escape from. (Fill this space with any ironic comment you wish).

But we are in a recession now. Why are people paying even more for a Mac rather than switching to Linux, which is cost-effective and ultimately better than both? I would say the reason is that the culture didn’t change with the economic situation the way it usually does. The reasons for that are many and varied, not the least of which is the anti-sharing media campaign put forth by the RIAA, MPAA, and other groups. (And hey, telling people not to share is telling people not to use free or open-source software.)

So to finally bring this around and include the reason for the title, will the year of Linux ever come? Probably. But not any time soon. If 9/11, two wars, the worst recession in decades and Michael Moore can’t seem to make a significant cultural change, then I just fail to see how it’s going to happen. And without it, I can’t see people abandoning their OS of choice en masse.

Unless of course it’s Linux that actually causes this cultural shift I’m waiting for… hmmm…


Richard Stallman is Right

I’ve been "dabbling" in Linux for a number of years now, but I’ve really only given it a serious try in the last few of months. During that time I’ve become very interested in the Open Source and Free Software philosophies. I’ve watched documentaries on the history of Linux and the FSF, and videos of Richard Stallman’s lectures.

Let me say first that in the history videos I’ve seen, Stallman really does come off as the freedom fighter he seems to see himself as. I really admire what he’s done, particularly in running with his ideas and finally arriving at the GNU GPL. His more recent lectures, however, leave a sour taste in many people’s mouths, including mine. He comes across more as a stubborn zealot than a freedom fighter. To demonstrate this, let’s have a look at the Free Software Foundation’s website, specifically the list of approved free OSes, and why some are not included.

My current OS of choice, Ubuntu, would definitely not make the list. Ubuntu offers me the choice, right at the point of install, to use proprietary codecs and drivers. Some included repos carry proprietary software that I could install with the included package manager. Indeed, on my personal system I am very conscious of this: I am using the Adobe Flash plugin in my web browser and the proprietary Nvidia drivers, and I have Skype installed. All of these things would exclude me from being a part of the FSF community.

So what if I wanted my system to be 100% free? I have some experience with Debian; I have a server here running LMDE. So I could switch everything to stock Debian, a distro that many, many other distros start from. Well, even Debian is not on that list. The reason is that there is proprietary code within the stock Linux kernel (proprietary kernel drivers, to be more precise), and Debian also offers proprietary software through its repos.

That the FSF condemns Debian as a non-free OS is, by itself, enough to turn most people away. But there is another side to this coin. There are real, inherent dangers in proprietary software, and the FSF is really only trying to avoid those dangers, and to warn others of them at the same time. The more proprietary software the kernel depends upon, the bigger the danger that it will at some point have to go backwards and replace those proprietary blobs in order to remain free (as in beer). There is always the chance that those blobs will some day come only with a price tag, or with cease-and-desist letters. If that day comes, those blobs may very well have to be replaced using code from the FSF (accompanied by a very smug, and justified, "I told you so").

Some may say that this is overstating things a bit, and in the kernel’s present state that may be true. But the list of proprietary blobs occupying space in the kernel seems to grow with every release. Will the day come when Linux, in its official form, will have to be considered proprietary? Its present growth certainly suggests that as a possibility, or at least that the proprietary code will become predominant over the free code. And all that proprietary code is owned by someone. As Linux grows in popularity, do you really think that at no point will any of the companies that own that code start withholding it pending licensing and payment? It’s simply naive to believe that. That cost would then have to filter down to the user for Linux development to continue. And the biggest side effect would be (is?) that as Linux goes down the commercial path, it becomes more and more like the restrictive, closed environments that all of us, developers and users alike, wanted to avoid in the first place.

It’s either that or Linux as a desktop will have to take giant leaps backward in functionality (and all the bad press that comes with it).

Thinking in this direction, I begin to see Stallman’s point. This could all be avoided if the "narrow winding path" were chosen and all proprietary code rejected in the first place. It would be a slower process, sure. Many things that can be done on a Linux desktop now wouldn’t be possible yet. All that proprietary code would have to be rewritten from the ground up, reinventing the wheel as it were. But there would never be the worry of anyone owning the system. It is owned by us all.

This makes me thankful for the Free Software Foundation. In avoiding the "wide and easy path" they are ensuring the future of the Free Software philosophy. They are ensuring that computers will in some fashion always be affordable to virtually everyone.

So will I be switching to an FSF-approved OS? At some point I think I’ll have to, but that will not be anytime soon; I’m still dual-booting Windows. But when that day comes, boy will I be grateful that Richard Stallman stuck to his guns. He may be a thorn in the side of the Open Source movement right now, but he certainly has his place.

I think the Open Source movement needs to be reminded of its roots now and again; it needs people like Stallman to point out when a direction goes against the ideals that prompted the creation of GNU/Linux in the first place. It may not be nice to hear at times, but sometimes the truth hurts.
