Google Assistant Home Control Actions – Introduction

During Google I/O 2017, Google launched its Assistant voice-enabled AI platform for the iPhone. Google also released more Google Actions developer tools, including a console to configure “Home control” actions. Until now, developers like us had to fill out a form and wait for the Google Home Control Actions team to enable a development skill.

Google Home Control actions serve the same purpose as Amazon Alexa Smart Home skills – allowing users to control their smart devices, such as plugs, lights, and thermostats, with voice. Much like Alexa, Google supports account linking with OAuth. A Google Home Control action also sends one of the predefined intents. Just like Alexa Smarthome, Home Control does not support custom utterances.

Code processing Google actions may be hosted on Google Cloud, or anywhere else. Commands are sent as HTTP requests. Essentially, Home Control actions processing software for Google Cloud is a custom Express.JS application. Our client decided to host the development code on Google Cloud and the production version on AWS Lambda. To move our Google Cloud project to AWS Lambda hosting, we used an existing piece of software – AWS Serverless Express.

Your wrapper AWS Lambda code might look like:

'use strict';

import { actionHandler } from './google_home_control';
const atob = require('atob');
const awsServerlessExpress = require('aws-serverless-express');
const awsServerlessExpressMiddleware = require('aws-serverless-express/middleware');
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const compression = require('compression');
const app = express();

app.use(bodyParser.urlencoded({ extended: true }));
app.use(awsServerlessExpressMiddleware.eventContext());
app.post('/', actionHandler);

const server = awsServerlessExpress.createServer(app);

export function lambda_handler(event, context, callback) {
    console.log("LAMBDA STARTED: ");
    console.log("LAMBDA EVENT: ", event);
    console.log("LAMBDA CONTEXT: ", context);
    console.log(`EVENT BODY IS BASE64: ${event.isBase64Encoded}`);
    try {
        // API Gateway may deliver the body base64-encoded; decode before proxying
        if(event.isBase64Encoded) {
            event.body = atob(event.body);
            event.isBase64Encoded = false;
        }
        awsServerlessExpress.proxy(server, event, context);
    } catch(error) {
        console.log("ERROR IN awsServerlessExpress: ", error);
    }
}

The Home Control actions API comes down to three intents: SYNC, called when an account is linked using OAuth; QUERY, to read the state of the devices; and EXECUTE, to perform control commands on behalf of the users. The SYNC intent is roughly a functional equivalent of Alexa Smart Home DISCOVERY. Unlike Alexa Smarthome API version 2, which invokes different control intents depending on the desired action, Google Home control has only one control intent, with specific action requests being passed in the payload, like "command": "action.devices.commands.OnOff". Device capabilities are called traits in Google terminology; for example, to support power control, devices must have action.devices.traits.OnOff.
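To make the flow concrete, here is a minimal sketch of dispatching the three intents. The request and response shapes follow the public Smart Home API; the device (a single plug, "plug-1") and its state are invented for illustration:

```javascript
// Dispatch a Home Control request on its intent. Each request carries a
// requestId and an inputs array with one intent.
function handleHomeControlRequest(request) {
  const intent = request.inputs[0].intent;
  switch (intent) {
    case 'action.devices.SYNC':
      // Describe the linked user's devices; traits declare their capabilities
      return {
        requestId: request.requestId,
        payload: {
          devices: [{
            id: 'plug-1', // hypothetical device
            type: 'action.devices.types.OUTLET',
            traits: ['action.devices.traits.OnOff'],
            name: { name: 'Living room plug' }
          }]
        }
      };
    case 'action.devices.QUERY':
      // Report current state, keyed by device id
      return {
        requestId: request.requestId,
        payload: { devices: { 'plug-1': { online: true, on: true } } }
      };
    case 'action.devices.EXECUTE': {
      // Commands arrive in the payload, e.g. action.devices.commands.OnOff
      const command = request.inputs[0].payload.commands[0];
      return {
        requestId: request.requestId,
        payload: {
          commands: [{
            ids: command.devices.map(d => d.id),
            status: 'SUCCESS',
            states: command.execution[0].params
          }]
        }
      };
    }
    default:
      throw new Error(`Unsupported intent: ${intent}`);
  }
}
```

In a real action, the SYNC and QUERY branches would be backed by the IoT backend rather than hardcoded values.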

If you have questions or need help with Amazon Alexa skills or Google Assistant actions, reach out to our experts.

Amazon Alexa vs Google Assistant vs Microsoft Cortana

During the 2017 Build event, Microsoft unveiled the promised Cortana-powered speakers and released Cortana to developers looking to introduce voice skills. We have hands-on experience with Amazon Alexa “custom” and “smarthome” skills. Our engineers were also lucky to participate in the Google Assistant “Home control” actions Early Access Program, and built regular Google Assistant actions. We, however, have not had a chance to get our hands on Cortana before the official release. Let’s look at all three platforms, based on the publicly available information.

Amazon Alexa
Amazon Echo, powered by Alexa, was the first voice-enabled speaker on the market. Skills are easy to configure through an intuitive web interface. Amazon has great documentation, and paid developer support is available. There are tons of examples for programmers building both custom and smart home skills. The code can be hosted on AWS Lambda, but it is not a must. Hosting on AWS allows invoking skill processing code by ARN. If the skill is simple, with a few tricks such as keeping the Lambda warm with CloudWatch events, you might be able to host an extremely quick autoscaling skill while fitting into the AWS Lambda free tier. Alexa currently supports US and UK English, as well as German.
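The CloudWatch keep-warm trick can be sketched as a scheduled event in serverless.yml; the function and handler names here are our own illustration:

```yaml
functions:
  customSkillAdapter:
    handler: lambda_function_custom_skill.lambda_handler
    events:
      # a periodic no-op invocation keeps the Lambda container warm
      - schedule: rate(5 minutes)
```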

Google Assistant
Google Assistant is available on Google Home and Google Pixel phones, but it is coming to every Android smartphone and other surfaces. Action execution endpoints can be hosted on Google Cloud, or elsewhere. Google Assistant actions are still less mature than Alexa skills. Documentation is less thorough, probably because Google started later in the game. Google Assistant Home control actions are still in the Early Access Program. We won’t discuss specifics until Home control has been officially released. There are, however, Home control actions running in production. Most major smarthome hardware manufacturers have either already introduced their products on Google Home, or are in the process of doing so. Google’s one apparent advantage is support for many more languages than Alexa. Further, Google Assistant is designed to integrate with other Google services, starting with the calendar. We believe that Google Assistant has a lot of potential. Time will tell how Google will execute on leveraging its strengths and making actions development partner-friendly.

Microsoft Cortana
Based on the publicly available information, there is no Alexa Smart Home or Google Assistant “Home control” -style API yet. Cortana supports more languages than Alexa, yet fewer than Google Assistant. Several manufacturers have partnered with Microsoft and announced their Cortana speakers, yet none have been released. It appears that Cortana skills have to be hosted on Microsoft Azure and nowhere else.

Microsoft is a bit late to the party, but this is probably only the beginning. There are rumors of an Apple Siri speaker, which may not pan out, or we might see one come out of Cupertino.
We believe that the main battle might take place in the cockpit of your car, where speakers make even more sense.
In the meantime, please stay tuned. Once Google releases “Home control” to the public, we will tell you how we managed to build a complex product for Google Assistant, in just one week, leveraging our Alexa experience and assets.

Voice-enabling smart-home devices with Amazon Alexa (Part III)

In the previous installment of our Alexa skills development exploration, we got to the point of building smarthome and custom skills for Amazon Echo. We used babel and webpack to transpile modern ES6/ES7 JavaScript into an older JavaScript version supported by Amazon Lambda. Now, our project produces a single output with either the smart-home or the custom Alexa skill. We are able to manually compress the transpiled JavaScript and upload it to Amazon Web Services using the Lambda console.
We, however, want the deployment done automatically. We will configure serverless framework files to:
– compress our Lambda code,
– create a new “stack” on the AWS side, if necessary,
– copy the Lambda to Amazon and set its invocation trigger.

Our main serverless configuration file is serverless.yml. Its definitions are split between function-level configurations and ones relevant to the specific stage and locality. It looks like:

service: ${file(./${env:DEPLOY_FILE_NAME}):service}

provider:
  name: aws
  runtime: nodejs4.3
  cfLogs: true
  stage: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):stage}
  region: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):region}
  memorySize: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):memorySize}
  timeout: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):timeout}

custom:
  globalSchedule: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):globalSchedule}
  roleName: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):roleName}
  profileName: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):profileName}
  keepWarm: false
  useApigateway: false

plugins:
  - pluginHandler
  - serverless-alexa-plugin

functions:
  # the function name is illustrative; the handler comes from the stage settings file
  skillAdapter:
    handler: ${file(./${env:DEPLOY_FILE_NAME_STAGE}):handler}

When settings apply to the function regardless of the deployment stage, we keep them in DEPLOY_FILE_NAME.
For example, our custom skill configuration looks like the below:
service: alexa-CustomSkillAdapter

package:
  exclude:
    - src/**
    - test/**
    - webpack/**
    - dist/**
    - build/**
    - node_modules/**
    - UiTest/**
    - bower.json
    - jsconfig.json
    - karma.conf.js
    - package.json
    - pluginHandler.js
    - serverless_settings/**
    - settings_eu_customskill.yml
    - settings_eu_smarthome.yml
    - settings_us_customskill.yml
    - settings_us_smarthome.yml
    - .idea/**
    - .npmignore/**
    - .jshintrc
    - event.json
    - lambda_function_smart_home.js
    - documentation.docx

functions:
  # function name illustrative
  customSkillAdapter:
    events:
      - alexaSkill

The serverless framework documentation covers what parameters are supported. We just want to draw your attention to the events setting. Our lambda functions do not use storage or databases, and only require the events that trigger Alexa intent processing. For a custom skill, this event type is alexaSkill. At the time this article is being written, serverless does not support Alexa Smart Home events, so the smarthome skill trigger needs to be set manually. AWS CloudFormation’s lack of support for Smart-Home events as AWS Lambda triggers is due to an unimplemented feature, already on the AWS roadmap – the EventSourceToken property of AWS::Lambda::Permission. However, if you want to set the trigger automatically now, before the feature is implemented by Amazon Web Services, it is still possible with a simple shell script. Using the aws command line tools, you can set the trigger by executing the below command:

$ aws lambda add-permission --function-name $MY_FUNCTION_NAME --action lambda:InvokeFunction --principal $PRINCIPAL --region $AWS_REGION --event-source-token $ALEXA_SKILL_ID --statement-id 8

We have now covered the design and implementation of an Alexa skills project for connected devices. Using babel, webpack and serverless, we were able to create a single Node.JS project used to produce and automatically deploy both custom and smart-home Amazon Alexa skills. The build scripts allow specifying different configurations depending on stage and locality.
If you have questions or need assistance with your Amazon Alexa projects, please contact us and maybe we could help. If you are interested in controlling smart-home devices with your voice, stay tuned. We are about to publish a post on building Google Assistant Home Control Actions.

Voice-enabling smart-home devices with Amazon Alexa (Part II)

In the previous post, we discussed setting up Amazon Alexa skills and an environment for building and deploying an autoscaling service to host processing of the skill intents.
Let’s start coding our AWS Lambda functions to handle events sent to us by Alexa skills.
In our lambda_function_smart_home.js, we handle events sent to us by the smarthome skill. The code is below.

'use strict';

import HandlerProvider from './alexa/smarthome/handler_provider.js';
import * as utils from './alexa/common/utils.js';
import logging from './alexa/common/logging.js';
import * as exception from './alexa/common/exception.js';

var handle_event_async = async function(event, context, callback) {
   // For more details on the format of the requests served by this lambda function check here:
   try {
       // Prevent someone else from configuring a skill that sends requests to this function
       var session = event.session;
       var provider = new HandlerProvider(event);
       var handler = provider.get_handler();
       var res = await handler.handle_event(event);
       callback(null, res);
   } catch (error) {
       if(error instanceof exception.SmartHomeException) {
           callback(null, error.get_error_response());
       } else {
           logging.error("Unable to return a sensible response to the user due to error: ", error);
           var errorUnhandled = new exception.DriverInternalError();
           callback(null, errorUnhandled.get_error_response());
       }
   }
};

export function lambda_handler(event, context, callback) {
   handle_event_async(event, context, callback);
}

In our function, actual intents are handled by the Smart Home handler class.
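To illustrate the provider/handler pattern referenced above, here is a hypothetical sketch: HandlerProvider selects a handler class based on the Smart Home (API version 2) event namespace. The class and method names mirror the lambda code; the handler bodies are stubs, not the client's actual implementation:

```javascript
// Stub handler for discovery requests; a real one would query the IoT backend
class DiscoveryHandler {
  async handle_event(event) {
    return {
      header: { namespace: 'Alexa.ConnectedHome.Discovery',
                name: 'DiscoverAppliancesResponse', payloadVersion: '2' },
      payload: { discoveredAppliances: [] } // filled from the IoT backend
    };
  }
}

// Stub handler for control requests (turn on/off, set temperature, etc.)
class ControlHandler {
  async handle_event(event) {
    return {
      header: { namespace: 'Alexa.ConnectedHome.Control',
                name: 'TurnOnConfirmation', payloadVersion: '2' },
      payload: {}
    };
  }
}

// Maps the event namespace to the handler that serves it
class HandlerProvider {
  constructor(event) {
    this.namespace = event.header.namespace;
  }
  get_handler() {
    switch (this.namespace) {
      case 'Alexa.ConnectedHome.Discovery': return new DiscoveryHandler();
      case 'Alexa.ConnectedHome.Control':   return new ControlHandler();
      default: throw new Error(`Unknown namespace: ${this.namespace}`);
    }
  }
}
```

Keeping the dispatch in one place lets the lambda entry point stay tiny while each intent family lives in its own module.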

Let’s init our npm project, if you have not done it yet, and add the packages and scripts necessary to compile our code.
First, we put together .babelrc for transpilation, below:

{
  "presets": [
    ["latest", { "modules": false }]
  ],
  "plugins": ["babel-plugin-add-module-exports"]
}

We will need the babel exports plugin for our tests, once the skill is ready. Now, we need to add some packages to our node project. We used the below versions:
"devDependencies": {
    "babel-cli": "^6.18.0",
    "babel-core": "^6.21.0",
    "babel-eslint": "^7.1.1",
    "babel-loader": "^6.2.10",
    "babel-plugin-add-module-exports": "^0.2.1",
    "babel-polyfill": "^6.20.0",
    "babel-preset-latest": "^6.16.0",
    "babel-runtime": "^6.20.0",
    "webpack": "2.2.1"
}

Now, the packages are in place. Let’s add scripts to build, test and deploy our project. We added the following commands to our package.json file.
"scripts": {
    "test": "./node_modules/mocha/bin/mocha --require babel-polyfill --no-timeouts --colors",
    "clean": "rimraf lib dist coverage",
    "build:smarthome:us:dev": "./node_modules/.bin/webpack --env.skill=smarthome --env.locale=us --env.stage=dev --config webpack/",
    "build:smarthome:all": "npm run build:smarthome:us:all && npm run build:smarthome:eu:all",
    "build:customskill:all": "npm run build:customskill:us:all && npm run build:customskill:eu:all",
    "build": "npm run build:smarthome:all && npm run build:customskill:all",
    "build:debug": "./node_modules/.bin/webpack src/app.js dist/debug/app.js --config webpack/debug/webpack.config.debug.js",
    "deploy": "sh ./"
}

The last script deals with serverless. We found it easier, in our case, to have a separate shell script invoke the serverless framework executable. This allowed us both to use cached credentials for deployment from CI tools, and to do an interactive deployment when invoked manually, with some of the parameters omitted.

Now, our code will compile and produce a single JavaScript file suitable for AWS Lambda deployment. To work locally, without having to deploy to Amazon to try every code modification, we used the app.js file that looks like the below:

'use strict';

//import {lambda_handler} from './lambda_function_custom_skill';
import {lambda_handler} from './lambda_function_smart_home';

let applianceId = "e96b94ba-da2b-.................";
let createAmazonHelpEvent = function(access_token) {
   return {
     "session": {
       "sessionId": "SessionId.db5adc55-878f-43ac-a58d-..........",
       "application": {
         "applicationId": "amzn1.ask.skill.f27c2889-d3f5-....-....-........."
       },
       "attributes": {},
       "user": {
         "userId": "amzn1.ask.account.A……………………………………………………………………………………….",
         "accessToken": access_token
       },
       "new": true
     },
     "request": {
       "type": "IntentRequest",
       "requestId": "EdwRequestId.b9caccff-34c7-4bcb-....-............",
       "locale": "en-US",
       "timestamp": "2016-11-03T19:56:51Z",
       "intent": {
         "name": "AMAZON.HelpIntent",
         "slots": {}
       }
     },
     "version": "1.0"
   };
};

let request = require('request');
let {username, password} = getCredentials("us");
// log in against the OAuth endpoint (URL omitted here) to obtain an access token
request.post(loginUrl, { form: { username, password } }, function (error, response, body) {
  if (!error && response.statusCode == 200) {
    let user = JSON.parse(body);
    lambda_handler(createAmazonHelpEvent(user.access_token), null, (error, result) => {
      console.log("DONE - lambda_handler");
    });
  }
});

In this article, we covered building barebone Amazon Alexa skills with modern JavaScript. While the code built as described in the previous installment and this post should be fully functional, provided that developers implement intent handlers, we still have not covered the serverless framework configuration in detail. We will do so in our next article.

Voice enabling smart-home devices with Amazon Alexa

Our client is a smart-home devices manufacturer. The company decided to voice-enable its products, starting with the ability to interact with the Amazon Echo family of devices, powered by Amazon Alexa technology.

The client had extensive expertise in the smart devices field. Their competent IT department wanted an enterprise-grade solution – easy to maintain and built for scale.

We were going to build two Alexa skills, utilizing the Smart Home Skill and Custom Skill APIs. The Smarthome API does not support, and hence does not require, setting up custom utterances. Smarthome skills are also easier to invoke. However, the lack of customization limits the ability to expose some of the finer features offered by specific target hardware. In the time that we have worked with Amazon on Alexa skills, the Smarthome API has been enhanced several times. We expect that, eventually, it will accommodate more smart-home devices and usage scenarios.

Building a basic skill is fairly easy. We used a configuration with account linking and an OAuth endpoint authenticating devices with access tokens. While a startup may not need this, we had to accommodate a well-established development process, with the product being built for multiple stages such as development, beta and production, and having to deal with different API and authentication endpoints depending on the stage and locality – our product needed to support different regions.

Maintaining all of this manually would create a huge problem once the product moved from the initial development to subsequent releases. Therefore, the client had an established system in place, with automated builds and continuous integration.

Alexa skills send events when users issue commands. One of the Alexa skill configuration parameters is the endpoint processing those events. While various cloud “resources” could be used to host the code processing Alexa requests, programmers typically use Amazon’s own Lambda service. In addition to being an auto-scaling solution, using Lambda simplifies the skill configuration and alleviates the need to set up a security certificate.
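For reference, a Smart Home control event delivered to such an endpoint looks roughly like the following (API version 2 format; the identifiers are placeholders, not values from our project):

```json
{
  "header": {
    "namespace": "Alexa.ConnectedHome.Control",
    "name": "TurnOnRequest",
    "payloadVersion": "2",
    "messageId": "..."
  },
  "payload": {
    "accessToken": "...",
    "appliance": { "applianceId": "plug-1" }
  }
}
```

The accessToken is the OAuth token obtained during account linking, which is how the endpoint maps the request to a user's devices.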



Amazon Lambda supports several programming languages, including Java, JavaScript and Python. Our language of choice was JavaScript, because much of the client’s backend was already running on Node.JS. If we were using Python, we might have opted for the zappa package. However, with Node.JS, we selected the serverless framework for configuration and deployment automation. We like serverless because of its support for Amazon Lambda, as well as Google CloudFunctions and Microsoft Azure. That would come in handy when adding Google Home and Microsoft Cortana Assistant support.


With Node.JS and serverless, we almost had a complete toolkit. The remaining problem was Amazon’s lag behind Node.JS release cycles. At the time of this writing, Amazon Lambda supports only Node.JS version 4.3. Recently, it was 4.2, and the latest upgrade happened about the same time version 7 came out. Our missing ingredients were babel, to transpile code written in the new ES6/ES7 JavaScript syntax into the version Lambda accepts, and webpack, to build all of our code, including modules, into a single JavaScript file.

Our project structure will be:

A source directory with Alexa client, IoT-backend client, common functionality directory and one directory for each skill type – smarthome and customskill. In the top source directory, we create three JavaScript files: smarthome_lambda_handler.js and customskill_lambda_handler.js for the two skills, and app.js to run/debug our code locally, without having to upload it to AWS Lambda.
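Under the description above, the tree might look like this; the directory names inside src are our own sketch (the alexa subdirectories match the import paths used later in this series, the IoT client directory name is assumed):

```
src/
  alexa/                         # Alexa client
  iot/                           # IoT-backend client (name assumed)
  common/                        # shared functionality
  smarthome/                     # smart-home skill
  customskill/                   # custom skill
  smarthome_lambda_handler.js
  customskill_lambda_handler.js
  app.js
```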

Now, we need to configure webpack to build our code. We create a webpack directory off the root of the project tree and populate it with webpack 2.2 configurations. We create one webpack configuration each for the development and production builds.

We used a base config shown below:

'use strict';

var webpack = require('webpack');

module.exports = {
    output: {
        library: 'projectpure',
        libraryTarget: 'umd'
    },
    resolve: {
        extensions: ['.json', '.jsx', '.js']
    },
    module: {
        rules: [
            { test: /\.cfg$/, loader: 'raw-loader' },
            { test: /\.js$/, loaders: ['babel-loader'], exclude: /node_modules/ }
        ]
    },
    target: 'node',
    externals: [ 'aws-sdk' ],
    plugins: [
        new webpack.ContextReplacementPlugin(/moment[\/\\]locale$/, /en/)
    ]
};

Actual build-type configurations inherit from the base, so our development config looks like:
'use strict';

var webpack = require('webpack');
var CopyWebpackPlugin = require('copy-webpack-plugin');
var baseConfig = require('./webpack.config.base');

module.exports = function(options) {
    let skill  = options.skill;
    let locale = options.locale;
    let stage  = options.stage;
    let skillEntry = 'lambda_function_smart_home.js';
    if(options.skill !== 'smarthome') {
        skillEntry = 'lambda_function_custom_skill.js';
    }
    baseConfig.entry = ['babel-polyfill', `./src/${skillEntry}`];
    var config = Object.create(baseConfig);
    config.output.filename = skillEntry;
    config.output.path = `./dist/${locale}/${skill}/${stage}`;
    config.plugins = config.plugins.concat([
        new webpack.DefinePlugin({
            'process.env.NODE_ENV': JSON.stringify('development')
        }),
        new CopyWebpackPlugin([{ from: `./src/config/${locale}/${stage}/config.cfg` }])
    ]);
    return config;
};
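To make the env-to-path mapping explicit, here is a self-contained reproduction of just the entry/output selection logic from the config above (logic only; webpack itself is not involved):

```javascript
// Derive the entry file and output path from the --env options webpack passes in
function selectBuild(options) {
  const skillEntry = options.skill === 'smarthome'
    ? 'lambda_function_smart_home.js'
    : 'lambda_function_custom_skill.js';
  return {
    entry: ['babel-polyfill', `./src/${skillEntry}`],
    outputPath: `./dist/${options.locale}/${options.skill}/${options.stage}`
  };
}

console.log(selectBuild({ skill: 'smarthome', locale: 'us', stage: 'dev' }).outputPath);
// → ./dist/us/smarthome/dev
```

Each locale/skill/stage combination thus lands in its own output directory, which is what lets a single project produce every deployable variant.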

Now, our project can be built. Let’s add code to it, produce a basic skill, and deploy it to AWS Lambda using the serverless framework. See how we did this in the next post.

React-Native – First Impressions

During F8 2015, Facebook engineers announced a new platform for building native mobile apps – React-Native. Originally, only iOS support was released, but after a short delay, Android support became available. React-Native extends the Flux/React libraries and architecture concepts with tools that produce native binaries. The language used to develop the code, just like with React.JS, is JavaScript. The ability to produce native-looking apps is achieved by combining JavaScript “glue code” with plugins/controls developed with native code. In that perspective, React-Native is similar to Appcelerator.

Unlike Appcelerator, React-Native has better semantics for describing plugin interfaces and, more importantly, as a React platform, React-Native enforces better application architecture.

At this time, React-Native does not support Windows targets.

The React-Native philosophy is upfront about the fact that no cross-platform tool is suitable for developing a single codebase for all platforms. Therefore, the React-Native team called their approach “learn once, write anywhere” instead of the traditional “write once, run anywhere.” We discussed the issues that prevent apps of moderate and higher complexity from running off a single codebase in our practical Appcelerator overview.

At our client’s request, we used React-Native to develop a now top-rated financial services app, which won iTunes App of the Week upon release. We want to share our experience with any developers thinking about giving React-Native a go.

What are the requirements?

React-Native should only be considered when developers have both extensive experience developing native apps and JavaScript expertise. Even if you are building a simple app, where no custom native controls will be required, it is possible that you will need to fix bugs in React-Native itself. Hence, without native platform expertise, we do not recommend utilizing React-Native for production apps.

What are the benefits?

  • React – Flux/React is a powerful architecture praised by leading software architects.
  • Glue code is shared between all platforms, making it easier to keep platform-specific parts synchronized.
  • App looks and feels native.
  • Tools such as Microsoft CodePush allow publishing some updates without having to wait for App Store approvals.

Is React Native ready for prime-time?

There is no simple answer to this question. There are lots of bugs in the platform, mainly in controls. The bugs are not terminal; an experienced engineer should be able to deal with them. The React-Native community is rapidly growing, and often a bugfix may already exist somewhere online. In a little over one year, the React-Native community produced lots of open source modules offering functionality such as integration with third-party services, as well as user interface controls. If you don’t mind maintaining a custom React-Native platform build, chances are that the bugs won’t stop your team. Apparently, the core React-Native team prioritizes features over stability. There is some UI responsiveness and snappiness degradation, on Android more than iOS.

While your mileage may vary, we believe that React-Native is a powerful platform worth at least observing.

Practical Xamarin.Forms Introduction

The cross-platform Xamarin toolset is used by developers to share code between versions of apps written for multiple platforms, including Android, iOS and Windows Phone. To further simplify development of simple forms-input-driven applications, Xamarin provides a special UI kit – Xamarin.Forms. In the past, we gave a short Xamarin.Forms overview. Today, we want to dive deeper, sharing practical Xamarin.Forms tips.

Xamarin.Forms is implemented on top of Xamarin.iOS, Xamarin.Android and Xamarin.WinPhone. If your app can be coded using only the Xamarin.Forms UI, it should theoretically allow sharing of both application logic and presentation. The same project could be built for and used on multiple platforms, without platform-specific customizations.


The above image illustrates the Xamarin.Forms architecture, with Portable Class Libraries sitting above Xamarin.iOS, Xamarin.Android and Xamarin for Windows Phone. In a nutshell, Xamarin.Forms is a collection of editors, layout panels, navigation panels, etc. In order to display controls, Xamarin utilizes the concept of a renderer. A renderer is essentially a platform-specific implementation of a cross-platform Xamarin.Forms primitive. In turn, platform-specific Xamarin controls are wrappers around native controls. For example, the PCL layer’s class Button is backed by a ButtonRenderer implemented in Xamarin.Android, Xamarin.iOS and Xamarin.WinPhone. A layer deeper, the ButtonRenderer is rendered using a native button control – UIButton on iOS.

While learning Xamarin.Forms, we uncovered various limitations. Let’s review the issues that developers face and strategies for mitigating some of those issues.

  • One of the first problems that we noticed while working with Xamarin.Forms is the incomplete implementation of WPF templates used for defining the visual appearance of controls.
  • Since platforms and their native controls often significantly differ, renderers have to hide some of the features in order to unify their PCL-level representation. For example, the Android text field control can be styled for both single- and multi-line appearance, while iOS includes two controls: UITextField for single-line input and UITextView for multiline input. Since one control is backed by a single renderer, the Xamarin.Forms PCL text input control is always multiline.
  • Sometimes, a PCL control looks or behaves differently on each platform. For example, some Windows Phone controls, such as the Switch control have wide margins. On iOS and Android, this issue is not present. Xamarin.Forms apps using such vanilla controls will render quite differently on Android/iOS and WinPhone, which could be a problem – a view with such controls that looks perfectly fine on Android and iOS may simply not fit the screen on Windows Phone.
  • An additional abstraction layer plus aggressive re-drawing/re-rendering of the view with the contained elements makes Xamarin.Forms apps noticeably slower than their native counterparts.

While the above-mentioned problems are indeed real, Xamarin.Forms can be “tuned” to mitigate some of the issues. In order to overcome the limitations, we will dive into the Xamarin platform. Let us try to deal with the wide margin issue.

We make a test app displaying two switches with the look defined by the below grid.

<Grid RowSpacing="0">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
    <Switch Grid.Row="0" />
    <Switch Grid.Row="1" />
</Grid>

On iPhone, our app will render as:


The appearance on Android will be similar, however on Windows Phone the app will look as:


The Windows Phone switch control has much wider margins. Since this issue is specific to Windows Phone, we have to solve it at the Windows Phone level. While we could create a new default Windows Phone style with narrow margins, this is not a good solution, since the new style would be applied to all WinPhone controls. Instead, we will adjust the appearance of the switch by creating a custom switch control renderer, where we can tune the visual appearance as necessary.

Control switchControl = VisualTreeHelper.GetChild(Control, 0) as Control;
Border border = VisualTreeHelper.GetChild(switchControl, 0) as Border;
Grid grid = VisualTreeHelper.GetChild(border, 0) as Grid;
grid.Height = 40;

Now, we need to return the correct size.

public override SizeRequest GetDesiredSize(double widthConstraint, double heightConstraint) {
   SizeRequest result = base.GetDesiredSize(widthConstraint, heightConstraint);
   result.Request = new Size(result.Request.Width, 40);
   return result;
}

Now, the margin is slimmed and our app looks as below.


Stay tuned for more Xamarin and Xamarin.Forms tips in upcoming posts.

Developing native Appcelerator modules – Part II – iOS module

Part II –  building a native Appcelerator module for iOS

Start by creating a new “Mobile Module Project.” To do this, right-click your Titanium App project in Appcelerator Studio, then select “New” and “Mobile Module Project.” Appcelerator will create an Xcode project for us.

Open the module project in Xcode and note that Appcelerator Studio created four Objective-C files: TiVkModuleAssets.h, TiVkModuleAssets.m and TiVkModule.h, TiVkModule.m. We won’t alter the automatically generated assets files. Some developers starting with Appcelerator module development might get confused by Titanium documentation mentioning the views and proxies necessary to implement visual elements. To make it clear: in our case, the module is only invoked through API methods. There are no buttons or similar elements. Hence, we won’t, at this time, make use of these proxies/views.

In TiVkModule.h, we add the public properties and methods to the interface.

/*
 * TiVkModule.h
 * Appcelerator module for social network VK
 * Created by Diophant Technologies, OU
 * Copyright (c) 2015 Diophant Technologies. All rights reserved.
 */

#import "TiModule.h"
#import <VKSdk.h>

@interface TiVkModule : TiModule <VKSdkDelegate>
{
    NSString *appid;
    NSArray *permissions;
    NSString *token;
    NSString *user;
}

// VK permissions for public API
enum : NSUInteger

-(void) authorize:(id)sender;
-(void) deauthorize:(id)sender;
-(void) makeAPICall:(id)args;

@end

Appcelerator Studio generated two important methods – moduleGUID and moduleId. Let’s keep those unaltered. Add barebone public methods to TiVkModule.m and start building out the functionality. According to the VK developer documentation, our module has to implement <VKSdkDelegate>. Note how our TiVkModule.h reflects this. It is also important to remember that although the JavaScript methods authorize() and deauthorize() have no arguments, we need to declare the Objective-C module methods with an (id) argument.

The most important method is authorize. Let’s quickly code the initial implementation.

/*
 * JS example:
 * var vk = require('ti.vk');
 * vk.appid = '1234567';
 * vk.permissions = [
 * ];
 * vk.authorize();
 */
-(void) authorize:(id)sender
{
    // we can only authorize for a specific app
    if (nil == appid) {
        [self throwException:@"missing appid" subreason:nil location:CODELOCATION];
    }

    [VKSdk initializeWithDelegate:self andAppId:appid];
    NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
    [nc addObserver:self selector:@selector(activateApp:) name:UIApplicationDidBecomeActiveNotification object:nil];
    if (![VKSdk wakeUpSession]) {
        [VKSdk authorize:SCOPE revokeAccess:YES];
    } else {
        // Have a token?
        if (![[VKSdk getAccessToken] isExpired]) {
            TiThreadPerformOnMainThread(^{
                NSMutableDictionary *event = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                    nil]; // event payload keys elided
                token = [[VKSdk getAccessToken] accessToken];
                [self fireEvent:@"login" withObject:event];
            }, NO);
        }
    }
}

Let’s also implement additional authorize methods with the appropriate signatures, as below.

- (void) authorizeForceOAuth:(id)sender
{
    [VKSdk authorize:SCOPE revokeAccess:YES forceOAuth:YES];
}

Now, we can finish implementing the <VKSdkDelegate> methods, like the one below.

// called when authorization succeeded
- (void) vkSdkReceivedNewToken:(VKAccessToken *)newToken
{
    token = [newToken accessToken];
    NSMutableDictionary *event = [NSMutableDictionary dictionaryWithObjectsAndKeys:token, @"token", nil];
    [self fireEvent:@"login" withObject:event];
}

Once we have implemented all the methods, let’s make sure that module.xcconfig includes a reference to the VKSdk.framework framework, so we can build our module with the build script generated by Appcelerator Studio.
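As a rough sketch, the module.xcconfig additions might look like the following; both the linker flags and the framework search path are illustrative and depend on where you placed VKSdk.framework:

```
// module.xcconfig (illustrative; adjust the search path to your project layout)
OTHER_LDFLAGS=$(inherited) -framework VKSdk
FRAMEWORK_SEARCH_PATHS=$(inherited) "$(SRCROOT)/../frameworks"
```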

Add the newly created module to your Appcelerator project from Help->Install Mobile Module… and adjust tiapp.xml.
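The tiapp.xml adjustment might look like the sketch below; the version number is an assumption and should match the version you built:

```xml
<!-- tiapp.xml: illustrative module entry for the freshly built iOS module -->
<modules>
    <module platform="iphone" version="1.0.0">ti.vk</module>
</modules>
```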

The last step is to replace the require statement, after which we can use the brand-new ti.vk native iOS module in our app.
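As a hypothetical sketch, the platform-conditional require might be factored like this; the helper name is our invention, and the ti.vkontakte fallback id matches the JavaScript module discussed in Part I:

```javascript
// Sketch: pick the native module id on iOS, fall back to the JavaScript
// implementation elsewhere. Helper name and module ids are illustrative.
function vkModuleId(osname) {
    return (osname === 'iphone' || osname === 'ipad') ? 'ti.vk' : 'ti.vkontakte';
}

// In the app (Titanium runtime assumed):
// var vk = require(vkModuleId(Ti.Platform.osname));
```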


We can now utilize the native iOS VK module to build Appcelerator apps. In the next installment, we will learn how to make our module work on Android.

Developing native Appcelerator modules – Part I

The ability to use native modules alongside JavaScript is one of the reasons Appcelerator is chosen for many projects. Modules narrow the gap between the native and cross-platform development approaches. Sometimes, native modules are the only way to make your Appcelerator app do what is required.

Programming native Appcelerator modules is not difficult. At the same time, we felt that even with the existing tutorials, first-time developers could use another detailed walk-through based on a real project.

Enjoy this three-part blog post covering, in detail, the implementation of a native Appcelerator module for the social network VKontakte, based on the VK SDK for iOS and Android.


Part I – Designing native Appcelerator modules for VK

Why and when is a native plugin necessary? In every situation the answer is different. In our case, the app was originally built with a JavaScript module. We found a free barebones JavaScript Titanium module called ti.vkontakte on GitHub and, within a couple of hours, extended it with the additional functionality we needed. The resulting module worked correctly on all supported platforms. The problem was that users were complaining about having to enter a username and password via VK web authentication, instead of authorizing through the standalone VK app. In fact, we discovered that the majority of end users often do not remember their credentials. A native plugin would allow users to interact with the VK social network using the session already running in the VK app, with no need to enter credentials separately.

Our extended ti.vkontakte JavaScript Appcelerator module supported the methods authorize(), deauthorize(), and makeAPICall(); exposed the properties appid, permissions, token, and user; and generated various events. We were happy with the JavaScript module’s API. Our goal was to rewrite the VK module in native code, supporting one platform at a time. We decided to reimplement the JavaScript vkontakte module’s API, keeping all signatures. This way, we would get away with only one conditional JavaScript statement to utilize the native modules where available.
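To give a feel for the makeAPICall() surface mentioned above, here is a hypothetical helper that builds its argument object; the VK API method name, the fields parameter, and the exact argument shape are our assumptions, not taken from the original module:

```javascript
// Hypothetical helper: build the dictionary passed to makeAPICall().
// The real module's argument shape may differ.
function buildApiCall(method, params) {
    return { method: method, params: params || {} };
}

// In the app (Titanium runtime assumed):
// var vk = require('ti.vkontakte');
// vk.makeAPICall(buildApiCall('users.get', { fields: 'photo_200' }));
```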

Further, we found a working Appcelerator module with similar functionality: ti.facebook, developed for the same purpose as our module, except for Facebook. In this tutorial, we will create a new module named ti.vk based on the JavaScript module’s API, the VK SDK for Android and iOS, and the ti.facebook Appcelerator module.

Before we begin coding the native VK module, let’s prepare by configuring the VK SDK. We had already created a standalone VK app, used for JavaScript authorization. To authorize through the SDK, VK requires developers to configure the app bundle field of the standalone app. The last configuration task is to set up our app’s URL scheme. We add the below section to the tiapp.xml file.

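As a sketch, the tiapp.xml URL-scheme section might look like the following; the vk1234567 scheme is illustrative, assuming the placeholder app id used earlier in this series and the VK iOS SDK convention of prefixing the application id with "vk":

```xml
<ios>
    <plist>
        <dict>
            <key>CFBundleURLTypes</key>
            <array>
                <dict>
                    <key>CFBundleURLSchemes</key>
                    <array>
                        <!-- "vk" + your VK application id; 1234567 is illustrative -->
                        <string>vk1234567</string>
                    </array>
                </dict>
            </array>
        </dict>
    </plist>
</ios>
```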

For more information about developing for VK, see the relevant developer documentation. In the meantime, we are ready to start coding our iOS and Android native Appcelerator modules. To be continued in Part II.

Xamarin.Android Performance Analysis

Ever wondered how Xamarin.Android performance stacks up against code written in Java? We did.

As an introduction, let’s review the Xamarin.Android architecture. While on Apple iOS Xamarin code is compiled ahead of time into native code, on Android things work differently: Xamarin code runs inside the Mono VM, which works side by side with the Dalvik VM running Java code. Hence, since Xamarin does not need to go through the JVM, it could in theory work as quickly as, or even faster than, Java. In practice, Xamarin apps will be larger than Java apps, since the Xamarin runtime needs to be bundled.

  • Let’s examine the overhead of the Xamarin runtime. To do this, we built a simple “Hello World!” app; its size is under 2 MB.
  • Startup time for native Java apps is better than for Xamarin apps. While a small Java application starts almost immediately, our Xamarin app takes about a second.
  • Integer and floating-point multiplication written in Java is about 20% quicker.
  • Operations with collections are almost 10 times faster on Xamarin than on Java, because Java lacks a struct value type.
  • String manipulations under Xamarin are about four times faster than code written in Android-native Java.