All posts by Amalan(Batzee)

About Amalan(Batzee)

I am a passionate mobile app developer. I love to write about mobile application development and IoT development, and sometimes I write about management as well.

Let's Write Code for the Myo Armband – Android

The Myo armband is a wearable device that senses the electrical signals produced when your forearm muscles move. It has an EMG sensor, a gyroscope and an accelerometer (check the official site for the exact specs).

So today we will see how to read data from the Myo armband and react to the electrical signals it senses.

When the Myo band was first launched, there were no instructions on how to set up the environment for Android Studio. There is, however, a sample for Eclipse on GitHub that you can try.

But today we are going to create and run an app in Android Studio that responds to your gestures.

OK then, let's jump into the tutorial.

Step 1
First of all, download the Myo SDK from the site, extract the zip file and place it somewhere you can reference later, since you will need its path. You may have to register before you can download it.
(You can download the SDK here)

Step 2
Create a new project in Android Studio. At the time of this tutorial my compile SDK version is API 23, Android 6.0 (Marshmallow), the build tools version is 23.0.2 and the minimum SDK is 18. I'll be using a Galaxy S3 for testing. Note that on Android Marshmallow you also have to handle the new runtime permission system; I'll be explaining it properly in another blog post, but a rough sketch follows below.
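As a minimal sketch (not the full treatment), this is roughly what a runtime permission check looks like on API 23+. I am assuming here that the Bluetooth LE scan needs ACCESS_COARSE_LOCATION, and REQUEST_LOCATION is just a request code you define yourself:

private static final int REQUEST_LOCATION = 1; // hypothetical request code

// In onCreate(), before scanning for the band:
if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_COARSE_LOCATION)
        != PackageManager.PERMISSION_GRANTED) {
    // Ask the user for the permission; the result arrives in onRequestPermissionsResult()
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.ACCESS_COARSE_LOCATION},
            REQUEST_LOCATION);
}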

Step 3
Go to build.gradle (Module: app) and add the repository below, then add the compile line to the dependencies block.

repositories {
    maven {
        url 'C:\\Users\\adh\\Desktop\\myo-android-sdk-0.10.0\\myo-android-sdk-0.10.0\\myorepository'
    }
}
compile 'com.thalmic:myosdk:0.10.+@aar'

Be careful when adding the maven URL: it has to be the path to the Myo SDK you downloaded earlier, and the path should go down to the level of the folder “myorepository”. In my case it was on the Desktop.

The compile line will pull the library from the path provided above when you sync. Unlike many other libraries, the Myo SDK is not hosted in a public repository where Android Studio can find it automatically, so this is a workaround to build a Myo app in Android Studio.

Step 4
Next, in the SDK you downloaded, go to
Eclipse –> MyoSdk –> libs and copy all the folders in it.
Be careful not to copy “myosdk.jar”.

Then go to your Android Studio project's location on disk.
In app –> src –> main, create a folder called “jniLibs” and paste the folders you copied earlier from the Myo SDK.

Now your project structure will look like this (the original screenshot is omitted; a rough outline follows):
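In my case the folders copied from the SDK were the per-ABI native library folders, so the layout ended up roughly like this (the exact folder names depend on the SDK version you downloaded):

app
└── src
    └── main
        ├── java
        ├── res
        └── jniLibs
            ├── armeabi-v7a   (.so files copied from the Myo SDK)
            └── x86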

Step 5
OK, now we have done the important setup and the workarounds for some errors you might otherwise face (I actually hit those errors and found these workarounds; they all come down to the lack of official Android Studio support from the Myo team).

So we go to our main activity, where we first have to create an instance of Hub and initialize it. The Hub is the component that listens to the signals from the band.

So we create and initialize it in the onCreate() method

Hub hub = Hub.getInstance();
if (!hub.init(this)) {
    Log.e(TAG, "Could not initialize the Hub.");
    finish();
    return;
}

Step 6
Now that we have initialized the Hub, we have to find an available Myo band and connect to it. For that we start an activity called ScanActivity, which comes with the Myo SDK.

Intent intent = new Intent(context, ScanActivity.class);
context.startActivity(intent);

All the hard work is done by the library; you just have to select the device shown by this activity to connect it to your app.

Step 7
OK, now we come to a place where we have to set something called a locking policy. Since this is a gadget that is always moving around, we have to enable one of the two policies available. (You can create your own policy and apply it, but for now we will look at the two default ones.)
One is LockingPolicy.NONE – this removes any locking; I am using it for this example to keep things easy to follow.
The other is LockingPolicy.STANDARD – the general-purpose policy most developers use. It locks the band when it detects that it is not being used, so when you want to use it again you have to perform the unlock gesture.

So we have to apply a locking policy in our app too.
The next code snippet to add is:

Hub.getInstance().setLockingPolicy(Hub.LockingPolicy.NONE);

Remember, all the code I am adding goes in the onCreate() method, and I am only showing the snippets here. In the full code sample you will find them nicely organized into methods.

Step 8
The last code snippet in onCreate() is optional, but I thought it was worth mentioning. The Myo SDK actually collects some usage data through the API, but we can stop it manually by adding these lines:

if (Hub.getInstance().isSendingUsageData()) {
    Hub.getInstance().setSendUsageData(false);
}

Here I check whether usage data is being sent, and if so I stop it by passing ‘false’ to setSendUsageData().
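Putting the snippets from Steps 5 to 8 together, onCreate() ends up looking roughly like this (just a sketch; the layout name is whatever your project generated):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    // Step 5: initialize the Hub
    Hub hub = Hub.getInstance();
    if (!hub.init(this)) {
        Log.e(TAG, "Could not initialize the Hub.");
        finish();
        return;
    }

    // Step 7: no locking for this demo
    hub.setLockingPolicy(Hub.LockingPolicy.NONE);

    // Step 8: opt out of usage-data collection
    if (hub.isSendingUsageData()) {
        hub.setSendUsageData(false);
    }

    // Step 6: let the user scan for and pick a nearby Myo
    startActivity(new Intent(this, ScanActivity.class));
}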

Step 9
Now we create a method that builds a listener, responds to the signals we receive and adds the listener to the Hub.

private void createAndAddListener() {

    mListener = new AbstractDeviceListener() {
        @Override
        public void onConnect(Myo myo, long timestamp) {
            Toast.makeText(context, "Myo Connected!", Toast.LENGTH_SHORT).show();
        }

        @Override
        public void onDisconnect(Myo myo, long timestamp) {
            Toast.makeText(context, "Myo Disconnected!", Toast.LENGTH_SHORT).show();
        }

        @Override
        public void onPose(Myo myo, long timestamp, Pose pose) {
            switch (pose) {
                case REST:
                    Toast.makeText(context, "REST", Toast.LENGTH_SHORT).show();
                    break;
                case FIST:
                    Toast.makeText(context, "FIST", Toast.LENGTH_SHORT).show();
                    break;
                case WAVE_IN:
                    Toast.makeText(context, "WAVE_IN", Toast.LENGTH_SHORT).show();
                    break;
                case WAVE_OUT:
                    Toast.makeText(context, "WAVE_OUT", Toast.LENGTH_SHORT).show();
                    break;
                case FINGERS_SPREAD:
                    Toast.makeText(context, "FINGERS_SPREAD", Toast.LENGTH_SHORT).show();
                    break;
                case DOUBLE_TAP:
                    Toast.makeText(context, "DOUBLE_TAP", Toast.LENGTH_SHORT).show();
                    break;
                case UNKNOWN:
                    Toast.makeText(context, "UNKNOWN", Toast.LENGTH_SHORT).show();
                    break;
            }
        }
    };

    Hub.getInstance().addListener(mListener);
}

Here mListener is a DeviceListener, created via AbstractDeviceListener, and you can see the three methods we override: onConnect, onDisconnect and onPose.

If the device is connected properly, onPose is the one that gets triggered when you make gestures with the Myo armband. I have added a different Toast message for each pose. If the gesture does not match any of the six predefined poses, it falls under “UNKNOWN”.

Step 10
Now that we have created the method that builds mListener and attaches it to the Hub, we have to call it in onResume():

@Override
protected void onResume() {
    super.onResume();
    createAndAddListener();
}

And we must not forget to detach the listener when we leave the app, so in onPause() we remove it:

@Override
protected void onPause() {
    super.onPause();
    Hub.getInstance().removeListener(mListener);
}

So that's it, folks; now you can write your own code to act on each gesture the Myo armband detects.

Have Fun Folks
😀

For the clean, complete code of this project, visit the GitHub repository.


Exploring Flic Button

What is a Flic?
Flic is a wireless hardware button. It works over Bluetooth, paired with your phone. It is not rechargeable, but the battery is replaceable. It can be stuck on a wall or pinned to clothing for easy access, depending on your needs. It can send three events to your phone for three actions: single click, double click, and press and hold.


It already comes with an app called Flic, which has some basic day-to-day functions predefined; that alone is more than enough for daily usage. But it also has an API we can use to invoke our own app or services running on the phone. This lets us develop a mobile solution triggered by the button, or create a service that gathers data from the phone's sensors and sends it to a server, so we can consider this part of the IoT concept.

OK, so let's try to write something that invokes our own functionality using the Flic button.

Step 1
First of all, this API does not work on its own: it needs you to install the official Android app and connect your Flic buttons through it before you start working with the API. You can download the app here.

Step 2
Now visit GitHub and download the Flic library project. You can simply download it as a zip file and unzip it. You can visit the site here.

Step 3
Open Android Studio and create a new project with a minimum API level of 19 (Android 4.4), then go to File –> New –> Import Module and select ‘fliclib-android’ from the GitHub library project you downloaded. Now you have added the library to the project structure.

Step 4
Now add a reference to the imported library by going to File –> Project Structure –> app (in the left sidebar) –> Dependencies tab –> the + button in the rightmost section –> Module dependency –> fliclib, and selecting ‘OK’.

Step 5

In your main activity's onCreate() you have to set up the app credentials:

FlicManager.setAppCredentials("[appId]", "[appSecret]", "[appName]");

You can get the credentials by registering your app with Flic here.

Step 6
After setting the app credentials, you have to grab a button from the main Flic app using this code snippet:

try {
    FlicManager.getInstance(this, new FlicManagerInitializedCallback() {
        @Override
        public void onInitialized(FlicManager manager) {
            manager.initiateGrabButton(MainActivity.this);
        }
    });
} catch (FlicAppNotInstalledException err) {
    Toast.makeText(this, "Flic App is not installed", Toast.LENGTH_SHORT).show();
}

Once you have selected a button, you get the result in the onActivityResult callback. If the grab succeeds, you can register for specific broadcasts; based on those, the broadcast receiver we are going to write in a moment will trigger events. In this case we subscribe the receiver to the UP_OR_DOWN and REMOVED events only.

@Override
public void onActivityResult(final int requestCode, final int resultCode, final Intent data) {
    FlicManager.getInstance(this, new FlicManagerInitializedCallback() {
        @Override
        public void onInitialized(FlicManager manager) {
            FlicButton button = manager.completeGrabButton(requestCode, resultCode, data);
            if (button != null) {
                button.registerListenForBroadcast(FlicBroadcastReceiverFlags.UP_OR_DOWN | FlicBroadcastReceiverFlags.REMOVED);
                Toast.makeText(MainActivity.this, "Grabbed a button", Toast.LENGTH_SHORT).show();
            } else {
                Toast.makeText(MainActivity.this, "Did not grab any button", Toast.LENGTH_SHORT).show();
            }
        }
    });
}

Now that the setup is configured, we have to write a BroadcastReceiver to receive the calls from the button and trigger events.

Step 7

Create a class called ‘BroadCastReceiverFlic’ that extends ‘FlicBroadcastReceiver’, which comes from the library project we added.
In that class, override the method ‘onRequestAppCredentials’ and set up the Flic credentials again, just as you did in the main activity's onCreate().
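A minimal sketch of that class (the exact callback signature is taken from my reading of the fliclib-android sources, so double-check it against the version you downloaded):

public class BroadCastReceiverFlic extends FlicBroadcastReceiver {

    @Override
    protected void onRequestAppCredentials(Context context) {
        // Same credentials as in the main activity's onCreate()
        FlicManager.setAppCredentials("[appId]", "[appSecret]", "[appName]");
    }
}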

Then, since we have already registered for the UP_OR_DOWN and REMOVED broadcasts, we can override these functions:

@Override
public void onButtonRemoved(Context context, FlicButton button) {
    // Button was removed
}

and

@Override
public void onButtonUpOrDown(Context context, FlicButton button, boolean wasQueued, int timeDiff, boolean isUp, boolean isDown) {
    super.onButtonUpOrDown(context, button, wasQueued, timeDiff, isUp, isDown);
    if (isUp) {
        Log.d("IS UP", "True");
    } else {
        Log.d("IS DOWN", "True");
    }
}

In the last method you can trigger events based on whether the button is up or down (in my case I just log different messages). If you had subscribed to the “CLICK_OR_DOUBLE_CLICK_OR_HOLD” broadcast instead, you would override ‘onButtonSingleOrDoubleClickOrHold()’.

Anyway, for the sample code I'll do the coding for the “CLICK_OR_DOUBLE_CLICK_OR_HOLD” broadcast, along the lines sketched below.
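As a rough sketch (again, the parameter list comes from my reading of the fliclib-android sources, so verify it against your copy), that override looks something like this:

@Override
public void onButtonSingleOrDoubleClickOrHold(Context context, FlicButton button, boolean wasQueued,
        int timeDiff, boolean isSingleClick, boolean isDoubleClick, boolean isHold) {
    // React to whichever of the three events fired
    if (isSingleClick) {
        Log.d("FLIC", "Single click");
    } else if (isDoubleClick) {
        Log.d("FLIC", "Double click");
    } else if (isHold) {
        Log.d("FLIC", "Hold");
    }
}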

So that's it, folks; you can get the full code on GitHub.

Let's Write Our Own App to Trigger the Mi Band

The Xiaomi Mi Band is one of the cheapest branded fitness trackers in the world. So why don't we do some experiments and make the Mi Band do what we say, for a change?

I'll be doing this coding session for Android using Android Studio; I hope others can still follow the basics.

Step 1
First of all we start a new empty project and add four buttons; these are for testing four basic functions of the Mi Band. Then initialize the buttons and set up their OnClickListeners, roughly as sketched below.
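A minimal sketch of that setup (the button ID and name here are hypothetical; use whatever you defined in your layout):

private Button btnVibrate;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    // One of the four buttons; the other three are wired up the same way
    btnVibrate = (Button) findViewById(R.id.btn_vibrate);
    btnVibrate.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            // Call a Mi Band function here once we are connected (see Step 7)
        }
    });
}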

Step 2
Add the Bluetooth permissions to the manifest file, or else you won't be able to connect to the band 😀

<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN" /

Step 3
Add the Xiaomi Mi Band dependency to the Gradle file and sync it:

compile 'com.zhaoxiaodan.miband:miband-sdk:1.1.2'

Step 4

In onCreate() you have to create and initialize an instance of the MiBand class:

private MiBand miband;
miband = new MiBand(this);

Step 5

I have not done the pairing part in code; I assume your Mi Band is already paired with the phone. If you have paired more than one device, you can list all the paired devices and let the user select one. But for demonstration purposes I have only my Mi Band paired, so it is the only device returned to me.

Object[] devices = BluetoothAdapter.getDefaultAdapter().getBondedDevices().toArray();
final BluetoothDevice device = (BluetoothDevice) devices[0];

So I am getting my paired device from the bonded devices; since only my Mi Band is paired, I just take the 0th device. Hope you got that part 😀
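If several devices are paired, a simple way to see what is there is to iterate over the bonded devices (here I just log them; a real app would show them in a list for the user to pick from):

Set<BluetoothDevice> bonded = BluetoothAdapter.getDefaultAdapter().getBondedDevices();
for (BluetoothDevice d : bonded) {
    Log.d(TAG, "Paired device: " + d.getName() + " / " + d.getAddress());
}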

Step 6
Now you have to connect to the paired device:

miband.connect(device, new ActionCallback() {
    @Override
    public void onSuccess(Object data) {
        pd.dismiss();
        Log.d(TAG, "Success !!!");
        miband.setDisconnectedListener(new NotifyListener() {
            @Override
            public void onNotify(byte[] data) {
                Log.d(TAG, "Disconnected!!!");
            }
        });
    }
    @Override
    public void onFail(int errorCode, String msg) {
        pd.dismiss();
        Log.d(TAG, "connect fail, code:" + errorCode + ",mgs:" + msg);
    }
});

Step 7

Once you have successfully connected to the device, you can start invoking MiBand functions from the button clicks.
For example, you can make it vibrate using this code snippet:

miband.startVibration(VibrationMode.VIBRATION_WITH_LED);
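Tying it back to Step 1, the call would typically go inside one of the click listeners (the button name is the hypothetical one from my Step 1 sketch):

btnVibrate.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // VIBRATION_WITH_LED is one of the modes exposed by the miband-sdk
        miband.startVibration(VibrationMode.VIBRATION_WITH_LED);
    }
});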

You can check out some more commands in the sample code available on GitHub.

A Myth comes true with Google Cloud Vision API

The Cloud Vision API is the latest addition to the Google Cloud platform. Last week it went into beta, allowing developers around the world to try it out. Google is currently offering two months of free usage as a promotion.

This API is already being used in the Google Photos app, so you may have already experienced its power. Analyzing objects in a photo, face detection, geographical landmark detection and fast search are some of its features.

Features

I was able to register for it last week and have already built an app. But since it is still in beta, I faced some problems creating the API key for Android (the problem and its solution can be found on Stack Overflow). I was able to find a quick fix, as many people are facing the same issue.

Going through the API, I found many awesome features that were just a myth until today. There is still no proper documentation, but you can find some of the popular features and getting-started docs listed here.

Here are the highlighted APIs, as shown in the feature overview image from the announcement.

You can also try the Google Cloud Vision API here.

Pricing also seems reasonable compared to the amount of processing they have promised to do. It will be a big breakthrough in the history of image-processing technology.

You can check out the Android app I have developed using the ‘FACE_DETECTION’ API; if you love selfies you will love it. Download the Selfie Mood app here.

Application of Locke’s Goal Setting Theory in Agile Methodology

A theory I came across recently seemed to align with some concepts of the Agile methodology, and I felt it fills a kind of gap in Agile. The theory is Locke's Goal Setting Theory.

Locke's Goal Setting Theory is about setting goals in such a way that achieving them also becomes a great motivator for the person or team pursuing them.

At my workplace we follow the Agile methodology to develop software, and I know some of the basic Agile process. The basic concept of Agile, as I understand it, is an iterative software development process that goes through many changes and iterations to deliver what the client really wants. The process is also transparent to the client, so the client is always kept up to date. The methodology is designed to create a better understanding between the client and the developers, so that together they arrive at exactly the product the client wants, including changes made right up to the final release.

When you compare the five key factors of Locke's Goal Setting Theory with the Agile process, you will see that some of them are very similar.

Let's see how closely they relate to each other.

Setting goals and trying to achieve them does not always work well for everyone. Some people achieve them, while others get stressed or stuck, and instead of being motivated they become demotivated for many reasons. In Agile, the equivalent of goals is the user stories created by clients, which we add to sprints as tasks and try to implement within that sprint.

The five factors that affect motivation, according to Locke's Goal Setting Theory, are:

Clear and Specific Goals

Challenging Goals

Handling Complex Problems

Commitment

Feedback

1 – The goal has to be clear and specific. In Agile we meet the client, understand their work environment, and then gather user stories, which the client explains to the development team. This ensures that the requirements we gather are clear and specific.

2 – The next one is challenge. We build sprints from the user stories provided by the client that we think we can complete within the sprint, and we set ourselves a deadline. Easy challenges are not motivating because they do not feel important, so we select a set of challenging stories to complete within the sprint. Achieving them gives us (the developers) a big boost of motivation.

3 – Handling complex problems. Developer estimates often go wrong, so sometimes developers get stuck on a problem that blocks the whole process, which leads to stress and demotivation. At times like this, in Agile we split the complex task into many subtasks to visualize the problem better, so we can handle and solve them individually.

4 – Commitment comes next. It is really important when working in a team, since we have to work together to achieve the goals. Software modules are normally developed by different developers and must ultimately work together to provide a solution, so being committed to the goals assigned to you paves the way to the overall team goal. Commitment is also affected by various internal and external factors, but keeping the developers committed is something the management or the team lead has to handle.

5 – Feedback is an important element. As a team lead, it is important to give feedback on the goals achieved by team members and the areas in which they can improve. Not only the team lead: the client should also be encouraged to give positive feedback about the developers when they are present in a meeting or a scrum. This will help the team members work more efficiently and happily in the future. Developers often treat positive feedback as another hidden goal, so it is the leads' responsibility not to spoil it for them.

Even though companies following the Agile methodology these days do motivate their employees and developers with team lunches or pay raises and bonuses, the motivational aspect is not spelled out anywhere in Agile itself. So I think that if we apply Locke's Goal Setting Theory to Agile, we can get better performance from the development team along with good customer satisfaction.

My Experience with Xamarin Android

Hi folks, I have been working with Xamarin Android for the last six months. I normally don't go for third-party development tools to make Android or iOS apps (I am not talking about Xamarin Mono, but Xamarin Android), but the project requirement was to develop in Xamarin. The reason the client gave was that we could share the same service layer between Android and iOS. OK, that's fine; most of the service calls are handled by the service layer, which is developed by our back-end developer.

Some people have a mindset where they compare Xamarin with hybrid platforms like PhoneGap. The reality is that Xamarin is a framework designed specifically for people who know C#, so they can write Android code in C# and not have to worry about Java. And the output is not a web-based solution: Xamarin produces a native app, so it is fast like any native app. Many C# developers I know have tried developing apps with Xamarin and ended up complaining, 'Hey man, it's the same Android code we have to write, just in C#, so what's the point?'. What I am trying to say is that even though it looks comfortable for C# people, when they start developing they will realize they still need at least some basic knowledge of Android.

That was an overview of what people think, and what I think, of Xamarin. Let's see how it feels when you try to develop and publish an app. For C# developers it helps to know some basic Android; otherwise you can still follow the Xamarin tutorials and figure things out. For Android developers, some basic knowledge of the C# language is enough.

When you create an Android project, it creates the file structure exactly like Eclipse or Android Studio does, giving you a good first impression. Then you can add activities and layouts as usual. Here the layout XMLs come with the extension .axml, while the XML files we create as resources are normal XML. When you start developing you will begin to see the differences.
For example, in native Android the EditText component has a method called setText:

editText.setText("String to Show")

but in Xamarin Android it is not a method but a property of that class, so you simply set the value like this:

editText.Text =  "String to Show";

You will see a lot of these kinds of changes. When it comes to libraries, you have to go for Xamarin Components, where there is only a handful to choose from. Most of the very popular ones have been added by the Xamarin team and well-known companies, but at times you still have to create a component project by porting a Java library project written for normal Android. You do, however, have lots of other NuGet packages for the simpler things you find hard to handle in Android.

One important thing about updates is that Google keeps updating its v4 and v7 support libraries, and Xamarin is slow to update its ecosystem to support them. So when you are using libraries or components, you need a good understanding of each component and its dependencies to get all the features you want from them, and in some cases you have to fall back to older versions of components.

I had no problems building and running a debug APK, but when I had to produce a release build I faced a lot of problems. One of them was the dependency-version issue explained above. On one occasion I had to remove all the Google Play Services components and find an alternative way to talk to Google Play services, which I managed to do with some HTTP GET/POST requests.

So what I will always recommend is this: if you are an Android developer, don't go for Xamarin unless you must. If you are a C# developer, you will get used to Xamarin, and hopefully Microsoft will acquire it and make it even better in the future.

How to Generate a PDF in Xamarin Android

It may sound simple, but when it comes to Xamarin all the problems start to emerge. Basically, Xamarin does not provide any special class for PDF generation.

Failed Solution 1
So we are left with the Android PdfDocument class, which, unlike in native Android, did not work for me: it kept giving me a white, blank document. When I searched the Xamarin documentation, all I found was the Android reference code, which is very disappointing (link to documentation).

Here is the code snippet for it (I may have made a mistake here; let me know if you find a working way to use the PdfDocument class):

private String sdCardPathforPDF;
private String filePath;
private FileStream stream;
private TextView tView;

sdCardPathforPDF = Environment.ExternalStorageDirectory.AbsolutePath;
filePath = Path.Combine(sdCardPathforPDF, "MyPDF/test5.pdf");
stream = new FileStream(filePath, FileMode.Create);

tView = new TextView(context);
tView.SetTextColor(Color.ParseColor(resource.GetString(Resource.Color.black)));
tView.Text = "Hello";
tView.SetHeight(50);
tView.SetWidth(50);

var document = new PdfDocument();
var pageInfo = new PdfDocument.PageInfo.Builder(612, 792, 1).Create();

var page = document.StartPage(pageInfo);
tView.Draw(page.Canvas);
document.FinishPage(page);

document.WriteTo(stream);

stream.Close();
document.Close();

Toast.MakeText(context, "PDF Generated", ToastLength.Short).Show();

Failed Solution 2
Then I searched for a plugin or a third-party library and, oh my god, I found some expensive ones.


Who would want to buy a plugin for 1,600 USD? Apitron and XFINIUM are some of them.

Failed Solution 3
I found some free, open-source PDF libraries for Android, so I thought of creating Java bindings for them. But when I tried, I ended up with lots of errors that I could not fix without being the developer of the library, so I dropped that idea as well.

Failed Solution 4
After the disappointment with the Java bindings, I came across an article saying that the iText library has been published for .NET under the name iTextSharp. But when I added that NuGet package I got an error saying:

System.IO.FileNotFoundException: Could not load assembly 'System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. Perhaps it doesn't exist in the Mono for Android profile?

So it says an assembly called System.Drawing is missing. I tried to add it manually, but it did not work. When I researched it, I learned that Mono for Android (the Xamarin Android framework) does not include that DLL. I don't know how such a popular DLL got left out of their framework.

At last a solution that Worked
With all that frustration, I still did not give up on finding a solution, until I came across a thread where someone had used ‘iTextSharp’ in Xamarin Android and it was working. He did not reply to my question in that thread, though, so I decided to take one final look at the NuGet manager. Just like before I searched for ‘iTextSharp’, but this time I went through every single result in the list.
And hooray, I found the one I was searching for: someone has made a NuGet package of ‘iTextSharp’ for Xamarin, called “Xam.iTextSharpLGPL”.


And this worked like magic. This library has more functionality than the default PdfDocument.
It is based on iTextSharp 4.1.6, which means it is licensed under the LGPL: free to use and open source.
The open-source project can be found on Bitbucket: https://bitbucket.org/smarongiu/xam.itextsharplgpl

So this is the basic code for writing a simple PDF document:

// The original snippet targets ASP.NET (Server.MapPath); on Xamarin Android use a
// writable path instead, e.g. external storage as in the earlier example.
string pdfPath = Path.Combine(Environment.ExternalStorageDirectory.AbsolutePath, "First PDF document.pdf");
FileStream fs = new FileStream(pdfPath, FileMode.Create);

// Create an instance of the Document class, which represents the PDF document itself.
Document document = new Document(PageSize.A4, 25, 25, 30, 30);

// Create a PdfWriter that ties the document to the file stream.
PdfWriter writer = PdfWriter.GetInstance(document, fs);

// Before we can write to the document, we need to open it.
document.Open();

// Add a simple and well known phrase to the document in a flow layout manner.
document.Add(new Paragraph("Hello World!"));

// Close the document and the writer, and always close open file handles explicitly.
document.Close();
writer.Close();
fs.Close();

I grabbed this quick code snippet from Micke Blomquist

Hope this article will help people like me in the future…!
See ya…!