Zach Lendon's Blog

Mobile, iOS, Grails and more

Starbucks’ Big Missed Opportunity

Starbucks stores have traditionally been both an individual work haven and a meeting place for professionals. Yet with limited seating, uncomfortable chairs, an often loud environment, and limited outlets, Starbucks is not in the business of providing their customers a coworking location. They should be.

Coworking is a trend that is growing in strength and revenue for companies such as Regus. Professionals are not only reporting improved performance, creativity and self-confidence in these workspaces, they are reporting that they feel healthier in them as well. Coworking spaces provide not only desk areas for individuals to work at, but often meeting spaces, copier/printer services, and more. Cafe Inc, a Minneapolis-based startup, is part of a growing national trend of “proworking” locations - coworking locations that add amenities such as a lounge, cafe, and coffee shop area alongside traditional coworking offerings.

Most Starbucks stores as currently constructed do not lend themselves to coworking or proworking by their customers. However, Starbucks has shown an interest in branching into the “cafe” space as they continue to expand their food offerings, and they already offer limited lounge areas in many of their locations. Stores that keep the traditional seating areas they have now, but add premium spaces where Starbucks could upsell professional coworking services, especially on a day-pass basis, could meet a market need that is currently going largely unfulfilled. Many current Starbucks locations could not support this model, requiring either new locations built with it in mind, or retrofitting of existing locations (ones that are not standalone, where adjoining space could be acquired).

Starbucks has shown a willingness to tweak their brand and innovate of late, and this is an opportunity for them to become even more ingrained in their customers’ daily lives. Workers willing - at the spur of the moment, or even planned in advance - to pay to hold a business meeting in private, or to use secured wifi in a more private, comfortable setting for their own work, would also be likely to buy more beverages and food from the Starbucks store adjacent to their short-term professional work area. Shouldn’t Starbucks be in the business of meeting their customers’ needs in a way that also maximizes Starbucks’ potential profit? If they don’t, I believe other businesses will step into their place and not only cement a missed opportunity for Starbucks, but slowly start to erode portions of their core business.

Professionals are already going into Starbucks to conduct business and to work - and Starbucks wants that, to a point. Starbucks should consider embracing what their customers are already doing and execute better at profiting from it by providing proworking services in the not-too-distant future.

Grails Event Push Real-world Notes

Today’s excellent talk by Colin Harrington at GR8ConfUS on async and, in particular, the events-push plugin has me wanting to note some thoughts I’ve developed while using the plugin at my current client.

First off, I use the following BuildConfig.groovy definition, so that I can exclude the resources plugin version used by the plugin:

    compile("org.grails.plugins:events-push:1.0.M7") {
        // exclude the resources plugin and Atmosphere runtime that the plugin
        // pulls in, so the versions the app declares are used instead
        excludes 'resources', 'org.atmosphere:atmosphere-runtime'
    }

A Sencha Dirty Store Records Quirk

Sencha provides some pretty slick out-of-the-box grid capabilities, one of which, editing, is demonstrated in the ExtJS 4 grid editing example.

One example of how Sencha does a great job making the easy hard and the difficult easy (just an opinion I’ve formed over the past several months) is the handling of editing empty cell values. For one of the apps I work on, I read XML (yes, awesome, I know) from a web service using Groovy’s HTTPBuilder, which under the covers uses XmlSlurper to parse said data. When the data is empty, it is by default parsed and represented as “”. When you edit a field in the Sencha grid, however, it gets changed to null. This marks the record as modified in the Sencha store when it really hasn’t been. Probably the proper approach is to write logic in the Sencha store’s update event to check modified records for this “”/null shenanigans and not allow the update on the record to occur. Once you start making modifications like this for several stores, you’ll probably find it best to get your OOP on and create your own Store object that extends Sencha’s with goodies like this.
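A minimal, framework-free sketch of that check (the function name is mine, and wiring it into a store ‘update’ listener is left to your own Store subclass):

```javascript
// Treat an edit as spurious when both the parsed value ("") and the
// post-edit value (null) are effectively empty.
function isSpuriousEdit(originalValue, editedValue) {
    var isEmpty = function (v) {
        return v === "" || v === null || v === undefined;
    };
    return isEmpty(originalValue) && isEmpty(editedValue);
}
```

In a store ‘update’ listener one could then compare each modified field’s original and current values and, if every change turns out to be spurious, call the record’s reject() to clear its dirty state.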

Ultimately, Sencha should in my opinion not mark these records as modified for you. But it is just another example of how Sencha is great at doing heavy lifting yet leaves lots of little pieces around for you to deal with - many of which you probably shouldn’t have to.

Updated ExtJs4 Mock Ajax Library for Jasmine

@kenspirit does a nice job in this series (part 1 and part 2) of blog posts of talking about some of the pain points (stores, Ajax handling) of unit testing ExtJS applications. While @kenspirit provides a nice adaptation of jasmine-ajax for ExtJS (as jasmine-ajax currently only supports jQuery and Prototype), that adaptation does not work with ExtJS 4. The following updated version should work - at least for basic Ajax mocking. Let me know if you have issues and I can (further) update and improve upon this latest adaptation.

ExtJS4: Fun (Not Really) With Styling Checkboxes in Grids

‘boxready’ is a good event to listen to for changing checkbox CSS styles programmatically when using CheckboxModels in an ExtJS 4 panel where the ‘grid’ is defined as an ‘item’ in the panel. When you are defining an actual ‘grid’ component, though (i.e., when you are doing more than simply tying an ExtJS ‘store’ to a standard grid component via an item) and want to change checkboxes within a grid listener, you’ll want to use the ‘viewready’ event instead - with a ‘defer’ to boot. I admit it seems there should be another event one could use without the ‘defer’ timing hack, but so far my attempts at other events have proven fruitless (I’d love to be advised differently). All of the events in question are shown in AbstractView’s source, and when using ExtJS 4+ it’s worthwhile to understand all that goes on in this class, especially event-wise.

The reason one needs to do this styling logic within these events is that the style changes must be applied after everything in your component is visible and any styles have been calculated and applied to elements within that component. While you would think you could set these styles when you instantiate the CheckboxModel and have them honored - and there are scattered allusions online to this working - you will find that it will fail you under certain scenarios. As an example, ‘headerConfig’ in CheckboxModel has a headerWidth property. However, if you look at CheckboxModel’s source, setting the config does not appear to actually change its value (and in practice this is what I’ve seen). In certain grid scenarios, depending especially on the ‘flex’ property of other columns, you may (ok - will) find ExtJS 4+ resizing your checkbox columns to unwanted sizes - usually too big, but potentially too small for your requirements.

While it is possible, and in practice probably better ‘code quality’, to actually get the grid or panel component (using Ext.getCmp() - by id), access the CheckboxModel using a ComponentQuery selector, and then change its width, another approach is to do it in a more jQuery-like fashion (in Sencha syntax, of course). I quite honestly find this a bit easier to do, and when working with Sencha, sometimes easy is really welcome. It is also satisfactorily safe in my opinion if you don’t have so many elements on the page that the querying performance of these operations will impact the usability of your app. In a pretty complex app, I’ve not seen the below queries suffer performance-wise:
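The original snippet didn’t survive here, but a hedged reconstruction of the approach looks something like the following - the CSS class names and the width value are illustrative, and depend on your ExtJS version and your grid’s generated markup:

```javascript
// Style the header checkbox (the "select all rows" checkbox)...
Ext.select('.x-grid-checkheader').each(function (el) {
    el.setStyle('width', '13px');
});

// ...then walk every grid row and style its first element (the row checkbox)
Ext.select('.x-grid-row').each(function (row) {
    var firstCell = row.first();
    if (firstCell) {
        firstCell.setStyle('width', '13px');
    }
});
```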

The above code would change the checkboxes in a grid - the first query finding the styled header checkbox (so you can select all rows) and the second loop finding all rows and grabbing the first element from each row. Yes, you don’t necessarily need the ‘each’ in the first scenario, but it doesn’t hurt anything either.

While ExtJS 4 and Sencha Touch continue to prove to be very powerful frameworks, understanding their nuances and pain points continues to be an interesting journey.

Thoughts on Testem vs Testacular (Karma)

Both Testacular - recently renamed ‘Karma’ - and Testem are great test runners for improving your javascript unit testing workflow. While Karma was developed as part of AngularJS, it is certainly useful as a test runner for javascript unit tests regardless of the frameworks and/or libraries leveraged. Key features that both Testem and Karma share include:

  • Support for running/driving multiple browsers simultaneously, including headless browsers (e.g., PhantomJS)
  • Support for the main testing libraries out there - QUnit, Jasmine, Mocha, etc.
  • Both will watch the source/test files (that you selectively configure) for changes and automatically re-run tests
  • Both provide support for local and CI use
  • Both are terminal focused

For me, I have found that I have a slight preference for Testem, for a few reasons. One is the text user interface, which lets you visually scan the test results by browser, whereas Testacular presents them as straight lines of output text.

Secondly, the ability to also run tests from a browser is a nice alternative when the terminal is not giving you as much flexibility as you want in certain testing scenarios and development workflows. Configuring this option is much more doable with Testem than with Karma. The minimal configuration required is described in the ‘Node Travelers’ Toolchain blog post (in the icing-on-the-cake section). As in the terminal, the main benefit over a normal Jasmine/browser TDD workflow is that with this Testem integration we get watching of changes and automatic reloading of the browser on changes. As a small aside, I’ve always found the Jasmine Bootstrap Reporter to be a great reporter for browser Jasmine reports and significantly better than the default Jasmine HTML reporters.

Do note that when using Testem with Jasmine you may have to slightly modify your Jasmine javascript source to ensure that the #testem hashtag is honored when choosing to run a suite of tests or individual tests from the browser.

On the Karma side, I must admit that Karma does seem to have nice debugging integration with the WebStorm IDE, an IDE that I’ve tried but never really used extensively. I also know Sublime Text 2 has had similar javascript debugging support available of late, so it’s possible that Karma’s debugging support is stronger than Testem’s and that leveraging it on an ongoing basis would prove its value. You’ll note in the Karma documentation “video” that support for “dumping” object state, console logging to the terminal, etc. provides a pretty strong workflow for those who want their javascript TDD view to look like a Sublime Text editor on one side of the screen and a terminal window on the other. This editor/terminal workflow is philosophically in line with what Testem is aiming for as well, though, so I’m not sure you’re really losing anything if your workflow fits one IDE/coding/execution paradigm vs. another.

Ultimately, both Karma and Testem will work for your javascript unit testing workflows, and they are better than the alternative - no javascript unit tests and/or no javascript unit test runner. From what I have found, Testem seems to have been around longer as a project (and thus be a bit more mature), have somewhat better documentation, and be more widely used. I’d also be remiss not to say that I have found it mildly annoying that Google search result links to “Testacular” Google Groups posts are dead ends, as the group has been renamed (to Karma). When there is ultimately such a small difference between a set of frameworks and/or libraries, it is these little things that add up and - for now - make me lean towards and recommend Testem. That being said, hopefully we’ll continue to see innovation from both of these tools, and the story on their usefulness and viability is only in its early stages.

Embrace Your Javascript Overlords

My first job out of college was working on OfficeMax.com (for OfficeMax) in 1999, where I primarily wrote a combination of client-side code and server-side javascript, run on Netscape Enterprise Server. For several years thereafter, I attempted to run far away from this ‘javascript’ world, as javascript at the time seemed to be a mess to deal with (this was well pre-jQuery, let alone all the other javascript libraries/frameworks of today), and languages such as Java were where it was happening. I can remember going to JavaOne for a few years in the early 2000s, and the buzz there was very WWDC-like. That seems hard to believe in this day and age, I’m sure. But it’s indicative of the tech industry being cyclical, and of the fact that today’s hot technologies are tomorrow’s not-quite-so-cool (but still widely used) technologies.

In the past few years I’ve strived to leverage the Java and object-oriented knowledge I gained from several years prior with other dynamic JVM languages and frameworks, as well as pried my way onto native and mobile web initiatives/projects. Aside from native mobile application work, I’ve found that working with javascript has been best at providing me with an ever-increasing amount of development enjoyment. The innovation in the space is often mind-boggling, and many of the solutions I run across are amongst some of the most elegant libraries and frameworks around today.

That being said, integrating client-side javascript libraries and frameworks with non-javascript-friendly (more on that in a moment) back-ends produces a set of challenges. There’s state synchronization, rather manual propagation of changes, and the wiring together of script packages/packaging, CSS compilers, code minifiers, client-side MV* frameworks, templating engines, client-side history, ORM, database, etc. And that’s just for starters. While this certainly can be managed by seasoned developers, after doing the work of adding all these pieces, wiring them together, testing them and maintaining them, at some point one has to ask: “is this the best way to be doing this?”

I’ve long since asked that question and told myself “no” many times. That being said, few web projects are greenfield, and rarely - basically never - are the decisions that drive the technologies used at companies politics-free. Certainly, though, I’ve reached a level of exasperation with it. Frameworks such as Derby, SocketStream and Meteor are either built upon or provide out-of-the-box (or optional yet rather easy) integration with popular libraries such as Node.js, Express, Socket.IO, Browserify and MongoDB. And many more. One of the challenges I see in the midwest as a developer is that there has been so much investment made by organizations and developers in the Java stack, and to a lesser extent Rails, that moving to these other stacks is an enormous challenge. There are misperceptions out there, I’m sure, that provide excuses for resistance: performance issues, documentation issues, SEO issues, maturity, etc. As I alluded to earlier, these are the same stories that get thrown out in the early part of any adoption cycle for an impending technology trend. Some of them have a degree of validity, but they are widely overblown. From my vantage point, being a strong believer in the “realtime” web replacing the “dynamic” web we see today, platforms such as Node.js - or Vert.x - are our web application platforms of - at the very least - the not-too-distant future. And what language works on all these platforms, and in all of the frameworks I mentioned above? Javascript. That’s why I say embrace it. That’s why I pushed in some talks I gave last year to “learn it” - to understand it - and most importantly, to know how to use it properly.

I’m hopeful in 2013 that I can - at the very least - help push the conversation at local companies and with local developers in my area forward on the types of technology stacks I’ve mentioned above. I have ideas for talks, blog posts and demo apps (not chat apps…) ready to be explored, to excite others, to help show the possibilities and dispel the myths. In short, I’m looking forward to helping others embrace our javascript overlords. If the interest and ideas are out there, I would certainly be very interested in joining forces with other local developers in this fight as well. We can either whine about the state of affairs at clients and companies (a trap I personally at times fall into), or we can actively work to show why there is a better way.

Using Tincr With Grails for Live Client-Side Reloading

There are various solutions out there for seeing client-side changes quickly in a browser. One such solution, livereload.com, was mentioned by Ted Naleid in this tweet. While I don’t have much experience with livereload, I’m not completely convinced I’m doing it wrong (as he suggests somewhat tongue-in-cheek) either (though it wouldn’t be the first time I’ve been wrong). I have been using another solution, and I wanted to share how to start using it with your own Grails application, if you so choose. The solution I have been leveraging for live-reload-“like” functionality is the Chrome extension Tincr. This post attempts to give you a quick guide to using Tincr with your Grails 2.x application and to talk briefly about how it helps you iterate on your client-side development efforts more quickly.

Once you install the Chrome extension, it will show up as a tab in your Chrome Developer Tools view.

You’ll notice that I chose the ‘Configuration File’ option in Tincr.

This allows me to customize the mapping, ideally through regular expressions, between the resource files as served and where they are located in my project. I do this mapping because I have had issues getting it to work simply with an HTTP web server, though others might have better success with one of the other pre-configured options.

Nevertheless, here’s an example tincr.json file, which you need to put right under the web-app folder of your project.
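The original file wasn’t preserved here, but a hedged reconstruction looks something like the following - the regular expressions are illustrative, and you should confirm the exact schema against Tincr’s documentation:

```json
{
  "toFile": [
    { "from": "js/(.+\\.js)$", "to": "js/$1" },
    { "from": "css/(.+\\.css)$", "to": "css/$1" }
  ]
}
```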

As you can see, this basic JSON simply maps js and css resources under the web-app folder, which I set as the Tincr ROOT folder in Chrome, to the project’s js/ and css/ folders. It works recursively for those directories as well. You can of course get fancier depending on how your project’s resources are defined.

One ‘gotcha’ to watch out for is resource bundling. To get this to work (at least without major pains), I turn resource bundling off in the Grails 2.x apps I use Tincr in by adding:

    grails.resources.debug = true

in the development environment config section of my project’s Config.groovy file. While this adds additional parameters to my resource files, it breaks them out of Grails’ default bundling strategy and more easily allows Tincr to do its magic.
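For reference, a minimal sketch of where that setting lives in Config.groovy:

```groovy
// Config.groovy: disable bundling only in development, so production
// keeps Grails' default bundling strategy
environments {
    development {
        grails.resources.debug = true
    }
}
```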

As for that magic, Tincr allows me to make changes to javascript or CSS files in the browser (in Chrome Developer Tools), hit Cmd+S (the Mac save shortcut), and have the changes saved back to the file system (and viewable instantly in my IDE). On the reverse side, as you can see in the Tincr documentation, you can define a ‘fromFile’ JSON attribute, which will allow you to save a file in your favorite IDE and have Chrome bring in the changes without reloading the page in the browser. Luckily, in this simple configuration example, Tincr is smart enough to reverse-engineer the ‘fromFile’ mapping, so defining it is redundant, and I have therefore not done so.

So hopefully this provides you an impetus and a guide to start “tinkering” (you knew it was coming…) with Tincr in your Grails application!

Four End User Advantages of HTML5 Apps for Mobile Devices

Rarely a week goes by without another article about “HTML5 Mobile Webapps vs. Native Apps.” Like criticism of Apple, these articles are great at generating traffic (and money?) for the hosting website but often settle little and rarely provide much value for either audience. That being said, seemingly everyone has an opinion on these topics, so don’t expect the articles to end anytime soon. Before I take a stand on one side of the aisle - in order to rebut points in a specific article I’ll mention shortly - I should preface this post by saying that I hate the ‘vs.’ framing of mobile apps. I believe both have their time and place, and for enterprise customers I often think “both” is the correct answer. With that out of the way, the latest in the line of these ‘vs.’-style articles that was brought to my attention today is Jeffrey Sambells’ post: “On Building HTML5 Apps for Mobile Devices”. While discussing the article point by point is very tempting - such as its incorrect summarization of Facebook’s current stance on HTML5 (they still get much more non-native mobile traffic than native, and HTML5 is still very much in play at Facebook) - the point in the article I want to address is:

  • Where’s the end user advantages (for mobile web)?

Well here they are - a list of 4 of the top “end user advantages” for mobile web applications:

  1. Mobile browsers crash less frequently than your native app. Users get pretty annoyed when apps crash.
  2. Not everyone wants to download an app. ~30% of mobile users have never downloaded any app. If they don’t want to download your app, but want to use your product on their mobile device, having a mobile web app is your other option.
  3. Some native applications will not work on your device - or don’t exist for your device. If you are using an older iOS version or certain Android devices/OS versions (for example) - or are part of the #wearethe3percent Windows Phone crowd or one of the 30 people still using Blackberry devices - then mobile web apps are often the only way to reach these users.
  4. Some use cases are better suited to mobile web applications. This Mashable article, under its Content Usage section, does a decent job of summarizing such use cases. Additionally, people who are travelling, especially in lower-bandwidth areas, will often be able to access this type of information more quickly via a mobile web app.

There are definitely points in Jeffrey’s article that I very much agree with - including the mythical fallacy: “I can just generate a native app from my mobile webapp using product X and it’ll be great!” In the end, mobile web and native aren’t going away anytime soon, and there are very strong arguments behind, and reasons for, leveraging each approach as part of an overall mobile strategy.

Log4Javascript - Quick Intro and a LocalStorage Custom Appender

With the continuing shift of more and more processing to the client side in today’s web applications, be they mobile-specific or not, effective logging of the running state of the client-side portion of your application is critical. There are a bevy of different solutions to your client-side logging needs, and I present this customization of Log4Javascript not as an endorsement of Log4Javascript as any sort of holy grail, but as a demonstration that it is a viable, customizable solution you should consider if you are working on a project with needs in this area.

Log4Javascript comes with a collection of appenders and a grouping of logging levels that give you the type of logging you’ve probably grown accustomed to in your server-side development efforts. Hop over to this JSFiddle and look at a sample Hello World-type example that outputs a log message to your browser console.
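The fiddle boils down to something like this hedged sketch (it assumes the log4javascript script is loaded on the page; “main” is an arbitrary logger name of my choosing):

```javascript
// Create a logger, attach the built-in console appender, and log a message
var log = log4javascript.getLogger("main");
log.addAppender(new log4javascript.BrowserConsoleAppender());
log.info("Hello World");
```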

Beyond appenders that write to the console, Log4Javascript includes out-of-the-box appenders that write to popups, alert, and submit Ajax requests to the server. Using any combination of them is a pretty simple operation - they offer different options and can be combined to provide a flexible yet powerful logging strategy.

The proposed LocalStorageAppender I reference adds to this toolkit by providing the ability to store log messages in a browser’s LocalStorage, if available. This can be an effective way to store messages for later use, if needed. For example, if you get an error later in the running of your application, wouldn’t it be nice to upload a set of log messages that happened before the error on the client side, along with the actual error? And if you didn’t have an error, not to post anything?

For the proposed LocalStorageAppender, I leverage Store.js, a “stupid simple” micro javascript framework for interacting with LocalStorage. As with the introductory example earlier, let’s first look at the core, working code that logs to LocalStorage in JSFiddle. To see it working, check the LocalStorage resource pane in your browser development tools (Firebug/Chrome Developer Tools/etc.), where you’ll see jsfiddle.jshell.net logging “Hello World” under a timestamped key.

Let’s break down the ‘running example’ code a bit here.

Here we set up our appender object with some pretty self-explanatory functions that all log4javascript appenders need to implement.
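Since the original snippet isn’t reproduced here, a hedged sketch of that skeleton (it assumes log4javascript is loaded; the layout choice is mine):

```javascript
// Skeleton of a custom log4javascript appender: subclass Appender and
// provide the functions every appender needs
var LocalStorageAppender = function () {};
LocalStorageAppender.prototype = new log4javascript.Appender();
LocalStorageAppender.prototype.layout = new log4javascript.PatternLayout();
LocalStorageAppender.prototype.toString = function () {
    return "LocalStorageAppender";
};
LocalStorageAppender.prototype.append = function (loggingEvent) {
    // format the event and store it in LocalStorage
};
```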

The nuts and bolts of our appender are in the append method, so let’s look at that.
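A hedged sketch of the append method (it assumes log4javascript and Store.js are loaded; the “log-” key prefix is illustrative, and you should check your Store.js version’s API for the support flag):

```javascript
LocalStorageAppender.prototype.append = function (loggingEvent) {
    var layout = this.getLayout();
    // same formatting approach as BrowserConsoleAppender's getFormattedMessage
    var formattedMessage = layout.format(loggingEvent);
    if (layout.ignoresThrowable() && loggingEvent.exception) {
        formattedMessage += loggingEvent.getThrowableStrRep();
    }
    // only store the message if the browser actually supports LocalStorage
    if (store.enabled) {
        store.set("log-" + new Date().getTime(), formattedMessage);
    }
};
```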

The getFormattedMessage function is the same as the one used by the BrowserConsoleAppender - nothing particularly special for our use case. You’ll see that I then use Store.js to check whether the browser supports LocalStorage, and if it does, I store the message in LocalStorage under a timestamped key. You could certainly be fancier with how you determine your ‘keys’, but this assures they are unique (obviously important) and provides a simple, (hopefully) understandable example.

You’ll then see an example use case where, if the loggingEvent has a level that meets a certain threshold (you could also choose to log only events that exceed a certain threshold), one can get all messages and format them into a single string, separated by newlines, with the last log message at the beginning of the string and a “log stack” of all the messages scooped up from LocalStorage below it. Those messages are then posted to the server, and in our success handler the LocalStorage store is cleared.
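That “log stack” assembly can be sketched framework-free like this (LogBuffer and its key scheme are illustrative and mine, not part of log4javascript or Store.js):

```javascript
// Messages are stored under timestamped keys and collected newest-first
function LogBuffer(storage) {
    this.storage = storage || {}; // plain-object stand-in for LocalStorage
}
LogBuffer.prototype.append = function (message, timestamp) {
    this.storage[timestamp] = message; // timestamped key keeps entries unique
};
LogBuffer.prototype.collect = function () {
    var storage = this.storage;
    return Object.keys(storage)
        .sort(function (a, b) { return b - a; }) // newest message first
        .map(function (key) { return storage[key]; })
        .join("\n");
};
LogBuffer.prototype.clear = function () {
    this.storage = {};
};
```

The string collect() builds is what would be posted to the server, and clear() is what the success handler would call.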

Additional items to think about when using this type of logging appender include cleaning up the LocalStorage data you have added when it is no longer useful - for example, upon entry/exit of portions of your application.

I think using LocalStorage is a nice approach for client-side logging from local through production environments, as it provides a way to consistently log the client portion of your application and selectively report back to the server when you have issues. It hits an especially nice sweet spot for mobile web applications, where client-side code execution/handling can unexpectedly vary across OSes and their various browsers.