
Sunday, December 19, 2010

ChromeOS and Me

Just three days ago, a mysterious package arrived at my door. It came from an unfamiliar name in Kentucky. With Christmas approaching, I thought perhaps my wife had ordered something online for me, so I set it aside for her until she got home. She inspected it when she arrived and was equally confused by the box and its return address. I insisted I hadn't ordered anything, so she started opening it. Inside, she found another cardboard box with a graphic on it.


Well that's... interesting. Baffled, she brought it to me. I inspected the box and noticed the lithium-ion warning on the bottom. After a moment of trying to figure out what in the world it could be, I remembered the ChromeOS pilot program form I had submitted no more than a week earlier. Thanks, Google!



In the last three days, I've been doing my best to work the CR-48 ChromeOS notebook into my day-to-day life. It's really such a handy companion. The 12" form factor and super lightweight design make it easy to lug around everywhere. It's thin, minimal, and does exactly what it's supposed to. The roughly 13-second boot-up time and a hair over one-second wake-up time mean the CR-48 (Mario) is at your beck and call at a moment's notice. I can get to the web faster by waking up the CR-48 than by waiting for my Android's web browser to load, and that's impressive. Did I mention roughly 10 hours of battery life? With my Dell Inspiron on the cusp of utter battery failure (a mind-blowing 10-15 minutes of battery life), the CR-48 has reopened the door to quick, anytime access to a computer. I don't have to worry about hunting for a power cord.


The hardware is impressive for a test platform. The keyboard feels great, the screen resolution is impressive (higher than my 15" Dell!), and the integrated microphone and webcam are a nice addition. I've heard this thing has Bluetooth, but there doesn't seem to be a way to use it in the OS yet. The touchpad is almost identical in size to the MacBook's and features only a single click button nestled below the touchpad's flush surface. The left edge of the CR-48 holds only a VGA port, a vent, and the left speaker. On the right reside the charging port (I would have preferred this on the back), a USB port, the headphone jack, an MMC/SD card slot, and the right speaker.


The CR-48 isn't without its problems, however. Although very light and thin, it can feel somewhat flimsy. Every time I use tap-to-click on the touchpad, there's an unpleasant "thud" as the case impacts the internals. A few extra screws and maybe a mil or two of thicker material ought to solve that easily enough, though. The touchpad itself is also a nightmare. The area above the touchpad's button is motion sensitive: imagine laying your finger flat on any laptop touchpad, then rolling to the tip of your finger without lifting it; the cursor will move. That's what happens on the CR-48 when you depress the button. Dragging and/or selecting groups of things is equally painful at times, as the multi-touch results in a jumpy cursor, and if you're not careful, resting an extra finger on the touchpad renders it useless. The display's color temperature also seems far too cool, and the contrast is washed out on all but the brightest settings, which can strain your eyes indoors. Considering this is what you're looking at the whole time you're using the machine, some improvement in the display would be a godsend.


What about the software? I think I'll save an in-depth look for another blog post so I can better elaborate. I've already expressed my concern that Google's lineup of web apps isn't ready for the concept of ChromeOS, so I'll revisit that idea later. Keep an eye on my Google Chrome OS Notebook set on Flickr as I update it with notable shots while I discover new things.

Saturday, December 4, 2010

CakePHP JSON Response Data in Headers

I recently utilized CakePHP's ability to automatically adjust view parameters based on the requested extension in the URI. This makes it really easy to specify a JSON-formatted response by just tacking ".json" onto the end of the URI. When I discovered this handy little feature of Cake's, I was praising the framework for such a quick and useful feature. Until things went wrong, of course.
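For reference, here's a minimal sketch of how this feature is typically wired up in CakePHP 1.3 (the controller and model names below are placeholders, not from my project): you enable extension parsing in app/config/routes.php and include the RequestHandler component so Cake picks the view that matches the extension.
[php]<?php
// app/config/routes.php: map URIs ending in ".json" to JSON-specific views
Router::parseExtensions('json');

// app/controllers/things_controller.php -- hypothetical controller for illustration
class ThingsController extends AppController {
    // RequestHandler selects the view/layout matching the parsed extension
    public $components = array('RequestHandler');

    public function index() {
        $results = $this->Thing->find('all');
        $this->set(compact('results'));
    }
}
?>[/php]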

I do most of my development in Chromium, but I noticed while testing my code for the new JSON responses that data wasn't showing up properly in Firefox. I didn't think much of it and figured I would return to the issue once I could focus on browser compatibility. Eventually, I found myself at that point in development. I discovered that returning only a sliver of the normal response data led Firefox to respond properly. I also found Chromium was hitting its own, significantly higher, limit. Firefox was capping the JSON response at 4096 bytes, whereas Chromium's limit was somewhere in the area of 305kB. The Chromium limit was practically reasonable for my application, but there was no way I could dodge Firefox's measly 4kB cap. But wait, there's no way Firefox limits all AJAX responses to 4kB, right? Gmail can't possibly operate on an army of tiny 4kB responses, so what am I doing wrong?

Courtesy of Firebug, I compared responses and noticed something peculiar: my server's JSON responses contained the entire data payload in the headers of the response itself. After some Googling, it seems response header fields have a size limit. In Firefox that limit is, you guessed it, 4096 bytes. As I found, CakePHP's automatic handling of the JSON extension takes the view variables and places them directly in the headers. Not only that, but my JSON views were returning the json_encode()'d result of my data, so the response contained the same data in two places. Without much time to find a proper way to make Cake stop adding data to the headers, I decided to go back and do it the ol' fashioned way, by myself. I removed the Router::parseExtensions('json'); line from Cake's routes, left my controller action alone, and changed my view to:
[php]<?php
// Use Cake's bare 'ajax' layout and silence debug output so nothing pollutes the JSON
$this->layout = 'ajax';
Configure::write('debug', 0);
// Send the data in the response body instead of the headers
header('Content-Type: text/x-json');
header('Cache-Control: no-store, no-cache, max-age=0, must-revalidate');
echo json_encode($results);
?>[/php]
And with that, both Firefox and Chromium now accept responses well beyond their respective 4kB and 305kB header limits. I'm still sprinting to finish this project, so I haven't had time to investigate whether the core Cake team knows about this issue. I plan to revisit it after the project launches and see if I can learn more about this strange behavior.

Monday, November 29, 2010

Sorting Related HABTM Model Data With Cake's Containable Behavior

Quite some time ago, Felix Geisendörfer created a behavior for CakePHP called `Containable`, which has since been added to CakePHP's core. I read about it and it always looked pretty interesting, but I dropped out of the CakePHP realm for a few years while I was busy completing my CSE degree. But now I'm back, and I stumbled upon a need to sort related data from a HABTM relationship. My heart filled with dread as I contemplated the ways to do this, mostly with CakePHP trickery and slow performing SQL queries. But luckily, Containable came to the rescue! I found a blog post that was extremely helpful, but it was written for the pagination helper, which I wasn't using. So below is the solution rewritten for regular find() calls to a model. Here's what my association looks like:
Event hasAndBelongsToMany Image

And I needed to make it so that when retrieving an event, the associated Image data was sorted by a specific field. Here's how to do that using Containable:
[php]public function myAction($order) {
    // Attach the Containable behavior to the Event model at runtime
    $this->Event->Behaviors->attach('Containable');

    // Pull in the associated Image records sorted by the requested order
    $this->Event->contain(array(
        'Image' => array(
            'order' => $order
        )
    ));

    $event = $this->Event->find('first');

    $this->set(compact('event'));
}[/php]
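As an aside (this isn't from the original snippet): if a model relies on Containable regularly, you can attach the behavior once at the model level with $actsAs instead of calling Behaviors->attach() in every action. Something like:
[php]<?php
// app/models/event.php -- attach Containable once, at the model level
class Event extends AppModel {
    public $actsAs = array('Containable');
    public $hasAndBelongsToMany = array('Image');
}
?>[/php]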

Well that was easy... hopefully I just saved you from a world of hurt.

Sunday, November 28, 2010

PHP Ternary Reference Assignment

I'm a fan of CakePHP, but sometimes it can do some goofy things. One such example is the afterFind() Model callback, whose $results parameter may contain data in two different formats depending on how the find results were retrieved elsewhere in the code. This breaks the black-box concept of functions, in which a function shouldn't care what happens outside its scope. Because of this oddity, you have to support either the model data existing in the $results array under a key named after the model, or the data existing directly in the $results array as simple key/value pairs. I attempted to make this easy to deal with by doing the following:

[php]$res = ($primary)
    ? &$result['MyModel']
    : &$result;[/php]

But for whatever reason, PHP fails with an "unexpected &" error after the ternary `?` operator. The ternary syntax is just a shorthand I use for a really easy if-then, so I decided to expand it and see what happens.

[php]if ($primary) {
    $res = &$result['MyModel'];
} else {
    $res = &$result;
}[/php]

This code runs without a hitch. So what's happening? As best I can tell, the ternary is an expression that evaluates to a value, not a variable, so there's nothing for the & to bind to; PHP's reference assignment (=&) is a special construct that wants a plain variable on the right-hand side, not an arbitrary expression. Although the ternary syntax reads like shorthand for the if-then code, PHP handles the two differently and consequently rejects reference assignment in the ternary form.
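As a quick sanity check (mine, not from the original post), the same ternary parses fine when assigning by value; it's specifically the reference that the parser chokes on:
[php]// Plain value assignment through a ternary is perfectly legal
$res = $primary ? $result['MyModel'] : $result;[/php]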

Thursday, October 7, 2010

Deleting Duplicate Rows in SQL Server Based on a Single Column

An import from a client's product list resulted in some incorrectly duplicated products. Normally this would be easy to sort out, except that the duplicates differed by a category column, so the good ol' DISTINCT keyword was of no help. This would also have been easy in MySQL, where a GROUP BY clause lets you select one row per distinct value of a particular column. SQL Server, however, is another story. As I found, if you try the GROUP BY trick, SQL Server will whine:
Server: Msg 8120, Level 16, State 1, Line 1
Column 'ItemNumber' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

I found some reasoning for this online, which is all well and good, except that MySQL does this like a champ and makes life easier as a result. So off I went to find a solution for SQL Server... I eventually stumbled on a page from the SQL Authority, by Pinal Dave (a very humble guy, judging by the title of his site). The gem, however, wasn't the post itself but a comment by Madhivanan further down the page. Thanks to Madhivanan, I came up with the following solution.

DELETE t FROM
(
    SELECT ROW_NUMBER() OVER (PARTITION BY ItemNumber ORDER BY ItemID) AS cnt, *
    FROM Item
) AS t
WHERE t.cnt > 1

This query numbers each row within its ItemNumber partition (ordered by ItemID), so the first occurrence of each item number gets 1 and any duplicates get 2, 3, and so on. We then delete every row numbered greater than 1, which removes the duplicates while keeping one copy of each item. Not as simple as using a GROUP BY, but it gets the job done.
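For what it's worth, the same idea can also be written with a common table expression, which some find easier to read (a sketch against the same Item table and columns):

WITH numbered AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY ItemNumber ORDER BY ItemID) AS rn
    FROM Item
)
DELETE FROM numbered
WHERE rn > 1;

Deleting through the CTE removes the duplicate rows from the underlying Item table while leaving the first occurrence of each ItemNumber intact.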

Friday, March 26, 2010

Hyperterminal Replacement For Linux

This is a little gem I found recently when I needed to communicate with my OpenLog board. A lot of tutorials want you to pull up HyperTerminal in Windows to talk to OpenLog over the USB->UART bridge. My biggest problem with that is that I'm primarily a Linux user, and HyperTerminal isn't available there. That's all well and good, though; I'll just boot into my Windows 7 install and do it, right? Wrong. Windows 7 no longer ships with HyperTerminal. So I jumped back to my Ubuntu Linux install and started hunting for a HyperTerminal replacement. The best solution I found was a command-line application called minicom. This handy little app takes a bit of getting used to, but once you're up and running, it works like a charm. Here's a quick rundown of connecting to a serial device over USB using minicom.

Wednesday, March 24, 2010

Flashing OpenLog Firmware in Ubuntu Linux

I recently hit a bug in the SparkFun OpenLog v1.1 firmware that left the device useless. I found out the hard way that version 1.1 only supports up to 255 log files. Once it hits this limit, the firmware doesn't know what to do with itself and loops endlessly. This even prevents you from entering command mode, where you could otherwise reset the log number. Nate Seidle at SparkFun quickly released an update, v1.2, to correct the problem. But then I had to figure out how to flash the new firmware onto my OpenLog. It turns out the process is extremely easy in Ubuntu, but the GitHub documentation mostly targets Windows, so I decided to document the process for Ubuntu users from start to finish here.

Thursday, January 14, 2010

8-bits of Processing Goodness

The last semester of my undergrad program in CSE is finally here! This semester I have a Senior Design course where students form their own groups and come up with an idea related to the curriculum and implement it. Now if only I had an idea of what to design...

I've been participating in autocross events for years and have always wished I could afford a data acquisition system like those you see used in Formula 1, Indy, Le Mans, and/or NASCAR. With a DAQ, I'd be able to see exactly what my car is doing at an event and use that information to help me drive the course faster on my next run. Hopefully.

That got me thinking. Surely I'm not the only amateur autocrosser wishing for an affordable data acquisition system made for the weekend warrior. In fact, LOTS of people across the nation, and across the globe, would probably love such a system. I didn't know of any that existed on a college student's budget, so I figured, why not make one? It just so happens this senior design semester is the perfect time to get started. But where to begin?

Sunday, July 12, 2009

Google Apps aren't ready for Chrome OS

Google recently announced their Google Chrome OS project, which will see release some time in 2010. It certainly looks to be a promising idea, though I can't help but think the project's success rests heavily on a very narrow window of implementation decisions. The slightest deviation from the "perfect" solution could make Chrome OS more a gimmick than anything. As the OS aims to be the most web-driven ever designed, it's clear Google will use their slew of online applications to support it. In their current state, however, Google Apps are a laughable replacement for desktop applications. The biggest flaw in all of Google's online applications is the necessity to sync each app before going offline. As things stand, if I wanted to go offline and maintain my current online experience, I would have to manually sync my Google Reader feeds, my Gmail inbox, and my Google Docs files first. How is Chrome OS going to be useful when its applications are worthless during unexpected spurts of offline use?

You might assume that Google's offline technology, Google Gears, just isn't capable of seamless online/offline transitions, but a quick look through their developer tutorial proves otherwise:
In a "background sync", the application continuously synchronizes the data between the local data store and the server. This can be implemented by pinging the server every once in a while or better yet, letting the server push or stream data to the client (this is called Comet in the Ajax lingo).

The benefits of background synching are:
  • Data is ready at all times, whenever the user chooses to go offline, or is accidentally disconnected.
  • The performance is enhanced when using a slow Internet connection.
The downside is that the sync engine might consume resources or slow down the online experience with its background processing (if it's not using the WorkerPool). Using WorkerPool the cost of synching is minimized and no longer affects the user's experience.
This is exactly the type of synchronization feature Google's apps need to make Chrome OS possible. Without it, Chrome OS will never succeed. Gears has been capable of background synchronization, yet each time Google adds offline capability to one of their apps, they force you to manually sync your data before going offline. I've never understood this strategy, and now it appears they must revise their offline approach for the Chrome OS project. I'd expect Google to think further ahead than this, but let's hope they catch on and we start seeing background sync rolled into their web apps as the Chrome OS launch nears.

Update (2009-12-23): It looks like Gmail has the most robust offline support of all the Google web apps now. Reader still forces you to manually sync, you can't create new documents in Docs if you're offline, and Calendar still only lets you look at your events without changing anything. This is promising though, as Gmail is now a suitable replacement for a desktop e-mail client. One step at a time.