
Error handling in RxJava

Once you start writing RxJava code you realize that some things can be done in different ways and sometimes it’s hard to identify best practices right away. Error handling is one of these things.

So, what is the best way to handle errors in RxJava and what are the options?

Handling errors in onError consumer

Let’s say you have an observable that can produce an exception. How do you handle that? The first instinct is to handle errors directly in the onError consumer.

  userProvider.getUsers().subscribe(
    { users -> onGetUsersSuccess(users) },
    { e -> onGetUsersFail(e) } // Stop the progress, show error message, etc.
  )

It’s similar to what we used to do with AsyncTasks and looks pretty much like a try-catch block.

There is one big problem with this, though. Say there is a programming error inside the userProvider.getUsers() observable that leads to a NullPointerException or something similar. It would be super convenient to crash right away so we can detect and fix the problem on the spot. But we’ll see no crash; the error will be handled as if it were an expected one: an error message will be shown, or it will be dealt with in some other graceful way.

Even worse, there won’t be any crash in the tests either. The tests will just fail with mysterious unexpected behavior. You’ll have to spend time debugging instead of seeing the reason right away in a nice stack trace.

Expected and unexpected exceptions

Just to be clear, let me explain what I mean here by expected and unexpected exceptions.

Expected exceptions are those that are expected to happen in a bug-free program. Examples are various kinds of IO exceptions, like a missing network connection. Your software is supposed to react to these exceptions gracefully, by showing error messages and so on. Expected exceptions are like a second valid return value; they are part of the method’s signature.

Unexpected exceptions are mostly programming errors. They can and will happen during development, but they should never happen in the finished product. At least that’s the goal. If they do happen, it’s usually a good idea to crash the app right away. This helps to raise attention to the problem quickly and fix it as soon as possible.

In Java, expected exceptions are mostly implemented using checked exceptions (subclasses of Exception that don’t extend RuntimeException). The majority of unexpected ones are implemented as unchecked exceptions derived from RuntimeException.

Crashing on RuntimeExceptions

So, if we want to crash, why not just check whether the exception is a RuntimeException and rethrow it inside the onError consumer, and handle it like in the previous example if it’s not?
  userProvider.getUsers().subscribe(
    { users -> onGetUsersSuccess(users) },
    { e ->
      if (e is RuntimeException) {
        throw e
      } else {
        onGetUsersFail(e)
      }
    }
  )

This one may look nice, but it has a couple of flaws:

In RxJava 2 this will crash in the live app but not in the tests, which can be extremely confusing. In RxJava 1, though, it will crash both in the tests and in the application.

There are more throwables besides RuntimeException that we want to crash on, Error and its subclasses for instance. It’s hard to track all exceptions of this kind.

But the main flaw is this:

During application development your Rx chains will become more and more complex. Also, your observables will be reused in different places, in contexts you never expected them to be used in.

Imagine you’ve decided to use userProvider.getUsers() observable in this chain:

Observable.concat(userProvider.getUsers(), userProvider.getUsers())
  .onErrorResumeNext(just(emptyList()))
  .subscribe { println(it) }

What will happen if both userProvider.getUsers() observables emit an error?

Now, you may think that both errors will be mapped to an empty list, so two empty lists will be emitted. You may be surprised to see that actually only one list is emitted. This is because the error emitted by the first userProvider.getUsers() terminates the whole chain, so the second argument of concat is never subscribed to.

You see, errors in RxJava are pretty destructive. They are designed as fatal signals that stop the whole chain. They aren’t supposed to be part of your observable’s interface; they behave like unexpected errors.

Observables designed to emit errors as a valid output have a limited scope of possible use. It’s not obvious how complex chains will behave in case of an error, so it’s very easy to misuse this kind of observable, and that results in bugs. A very nasty kind of bug, too: reproducible only occasionally (under exceptional conditions, like a lack of network) and leaving no stack traces.

Result class

So, how do you design observables that return expected errors? Just make them return some kind of Result class, which contains either the result of the operation or an exception. Something like this:

data class Result<out T>(
  val data: T?,
  val error: Throwable?
)

Wrap all expected exceptions into this and let all unexpected ones fall through and crash the app. Avoid using onError consumers, let RxJava do the crashing for you.

Now, while this approach doesn’t look particularly elegant or intuitive and produces quite a bit of boilerplate, I’ve found that it causes the least amount of problems. It also looks like this is the "official" way to do error handling in RxJava: I’ve seen it recommended by the RxJava maintainers in multiple discussions across the Internet.
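
To illustrate the difference, here is a minimal sketch of the concat example from above, assuming userProvider.getUsers() has been changed to return Observable<Result<List<String>>> as shown in the snippets below. Even if both calls fail, two Result values are emitted and the chain completes normally:

Observable.concat(userProvider.getUsers(), userProvider.getUsers())
  .subscribe { result ->
    if (result.error != null) {
      println("Expected error: ${result.error}") // handled as data, the chain keeps going
    } else {
      println(result.data)
    }
  }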

Some useful code snippets

To make your Retrofit observables return Result you can use this handy extension function:

fun <T> Observable<T>.retrofitResponseToResult(): Observable<Result<T>> {
  return this.map { it.asResult() }
    .onErrorReturn {
      if (it is HttpException || it is IOException) {
        return@onErrorReturn it.asErrorResult<T>()
      } else {
        throw it
      }
    }
}

fun <T> T.asResult(): Result<T> {
  return Result(data = this, error = null)
}

fun <T> Throwable.asErrorResult(): Result<T> {
  return Result(data = null, error = this)
}

Then your userProvider.getUsers() observable can look like this.

class UserProvider {
  fun getUsers(): Observable<Result<List<String>>> {
    return myRetrofitApi.getUsers()
      .retrofitResponseToResult()
  }
}
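
On the consuming side you then branch on the Result instead of supplying an onError consumer. Here is a minimal sketch, reusing the onGetUsersSuccess and onGetUsersFail handlers from the first example:

userProvider.getUsers().subscribe { result ->
  if (result.error != null) {
    onGetUsersFail(result.error) // expected failure: stop the progress, show error message, etc.
  } else {
    onGetUsersSuccess(result.data!!) // happy path
  }
  // No onError consumer: unexpected exceptions fall through and crash, as intended.
}
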
That’s it. I hope you liked the article; please comment below and stay tuned for the next one.

Quick Intro Into Actions on Google

Google Home will finally be available in Germany on August 8th and in France this week. I’m not aware of announcements for other countries, but I hope and assume that availability will extend to many more countries as soon as possible. For me, though, getting my AIY kit was the day I started getting interested in developing with Actions on Google.

Different types of Interfaces

Conversational interfaces is a very broad term. It covers everything from text chats, whether voice is used or not, up to pure voice interfaces like the one used by Google Home.

Actions on Google supports text-based interfaces and, depending on the capabilities of the device, a limited set of visual feedback and touchable actions. I will cover those differences, and how to detect which capabilities the device in question has, in later posts. On a mobile device the text can be entered either by keyboard or by voice. With Google Home it obviously can only be entered by speaking to the device.

BTW: You can expect the Assistant to appear in other devices as well, be it IoT devices, cars or anything else where a voice interface can be useful. Google’s AIY kit itself uses Actions on Google (or can be made to use it). How to achieve this is also the topic of an upcoming post.

Two SDKs for The Google Assistant

When it comes to the Google Assistant, there are two very different offerings by Google:

1. The Assistant SDK and

2. Actions on Google

1. Assistant SDK

With the Assistant SDK you can enable devices to embed the Google Assistant. This means that it allows you to add the Google Assistant to a device made by you. It also allows you to change the way the Assistant is triggered on your device – for example you can use a button press instead of the “OK, Google” phrase.

The SDK is based on gRPC, a protocol-buffer-based message exchange protocol, which has bindings for a plethora of languages. As a sample (and for practical use as well), complete Python bindings already exist for certain Linux-based architectures.

If you are creating devices and want to integrate the Assistant into them, then the Assistant SDK is the SDK of your choice. The AIY kit runs the Assistant SDK on a Raspberry Pi. I will get into this SDK in a follow-up post.

2. Actions on Google

With Actions on Google you can create apps for the Google Assistant. The remainder of this post is all about this option.

Two options to use Actions on Google

When developing apps for the Google Assistant, there are again two options:

Use the Actions SDK directly

Use a service on top of the Actions SDK

Google gives you a recommendation as to when to use which option in its "Build an app in 30 minutes" guide.

Using the Actions SDK

The Actions SDK allows you to directly access the recognized user text and to deal with it in your backend. It is suited for either very simple projects with clear commands or if you are sufficiently proficient in natural language processing.

Using api.ai or other services on top of the Actions SDK

Most often, using a service is the better option. It’s not that the Actions SDK itself is particularly complex. The problem lies more in detecting what the user intends with a response and parsing the text to get the relevant data. This is where those services shine. You enter some sample responses, and the service then understands not only these sentences but many, many more that resemble them with different wording, a different word order, or both. And it extracts the data you need in an easily accessible format. Consider understanding dates, which is not even the most complex example: you have to understand "next week", a specific date, abbreviations, omissions and much more. That’s the real value of these services.

One such service is api.ai, which was bought by Google last fall. As such, it’s only natural that this service supports Actions on Google quite nicely. In addition, you can use api.ai for other platforms like Alexa, Cortana, Facebook Messenger and many more. I will cover api.ai thoroughly in future posts.

You are not limited to api.ai, though. One contender is converse.ai, which I haven’t had the opportunity to test yet. The visual design of converse.ai’s conversation flow has some appeal, but whether it’s practical and overall as good as api.ai, I cannot tell. Hopefully I will be able to evaluate it while continuing with my Actions on Google posts.

Let’s put things into perspective

Even though conversational interfaces seem to be all the rage lately, they are not really new.

Actually, they are quite old. Eliza was programmed by Joseph Weizenbaum in the sixties and created quite a stir back then. You can try it out for yourself on dozens of websites.

My first experience was the fictional interface shown in the 1983 film WarGames.

And of course there was Clippy in the late nineties, the worst assistant ever.

So if they are not new, why are they all the rage now? Well, luckily we have progressed since then: nowadays we have all kinds of chatbots integrated into messengers and other communication tools, website assistants that pop up if we linger for a while on a particular page, and true voice-only interfaces like Amazon’s Alexa and Google Home.

And those are powered by a much better understanding of human language, of the intents of the user and how to find and combine important entities of the user’s spoken text.

The Google Assistant works on voice-only devices (like Google Home) as well as, with some visual add-ons, on phones and other devices with a touchscreen.

Wrap up

This was a very quick rundown of the Actions on Google options. In coming posts I am going to show you the base concepts like actions, intents and fulfillment, how to make use of api.ai, what tools and libraries Google provides, how to connect to your home devices, which permissions are needed within the Assistant, and how to make use of the Assistant from within Android Things.

And in the meantime I’m going to talk about this stuff at upcoming DevFests.

Stay tuned!

Top Features of Android Studio 3.0 that Make App Development More Powerful

At the Google I/O keynote on 17th May 2017, in addition to several other announcements, Google unveiled Android Studio 3.0, the latest version of its integrated development environment (IDE) for the Android platform.

The key focus of this new version of Android Studio is to accelerate the app development flow and offer the best tools built for Android. It comprises three headline features: a new set of app performance profiling tools to diagnose performance problems swiftly, faster Gradle build speeds for large projects, and support for the Kotlin programming language.

The latest Android Studio 3.0 is strongly integrated with the Android development platform, with special features such as Instant App development support, new wizards for Android O development, Google Play Store included in the Android O emulator system images, and several more. The first canary release of Android Studio 3.0 comprises more than 20 features, as follows:

Support for Kotlin Programming Language

Since the IDE now supports the Kotlin programming language, developers can add Kotlin code to their existing Android apps. A built-in conversion tool converts Java files into Kotlin files, or developers can choose to create a Kotlin project from scratch with the help of the new project wizard. Kotlin is one of the emerging programming languages today, and its integration into Android Studio is indeed a great announcement.
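
For readers who haven’t seen the language yet, here is a small, purely illustrative taste of Kotlin (the User class and greet function are hypothetical examples, not something from the release notes):

// A hypothetical Kotlin snippet showing the conciseness and null safety of the language.
data class User(val name: String, val email: String? = null)

fun greet(user: User): String {
    // String templates plus safe handling of the nullable email field
    val contact = user.email ?: "no email on file"
    return "Hello, ${user.name} ($contact)"
}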

Read also: Kotlin – The Latest Powerful Language to Streamline Android App Development

Improved Layout Editor

You will find several improvements to the Layout Editor in this release of Android Studio. It gets an updated component tree with better drag-and-drop view insertion, as well as a new error panel. Moreover, together with the ConstraintLayout update, the Layout Editor also supports creating groups and view barriers, and improves chain creation.

Support for Java 8 Language Features

In Android Studio you now have access to features like Instant Run for projects that use Java 8 language features. To get the new Java 8 language toolchain support, update the source and target compatibility levels to 1.8 in your project’s Project Structure dialog.

Android Things Support

Using the new template sets in the New Project wizard and the New Module wizard of Android Studio 3.0, you can now start developing for Android Things. It lets you extend your Android app development knowledge into the Internet of Things (IoT) device category.

Adaptive Icon Wizard

Adaptive launcher icons, introduced by Android O, can be displayed in different shapes on different Android devices. The new wizard lets you create adaptive launcher icon assets and preview how they look with different launcher screen icon masks. You can create a new asset by right-clicking the /res folder in your project and navigating to New -> Image Asset -> Launcher Icons (Adaptive and Legacy).

XML Fonts & Downloadable Fonts

Adding custom fonts to your Android O app becomes simple in Android Studio with the XML font preview and font selection tools. You can also create a downloadable font resource for your app, which avoids the need to bundle a font resource in your APK. You just need to make sure that your emulator or device is running Google Play Services v11.2.63 or higher.

Update of IntelliJ Platform

Android Studio 3.0 Canary 1 includes the IntelliJ 2017.1 release, which brings several features such as Java 8 language refactoring, semantic highlighting, parameter hints, enhanced version control search, draggable breakpoints, and many others.

Instant App Support

Android Studio 3.0 makes it possible to create Instant Apps in your project. Instant Apps are lightweight apps that users can run immediately without installing them. To support this, Android Studio introduces two new module types: instant app and feature. Combined with the App Links Assistant and the new "Modularize" refactoring action, this lets you extend your app into an Instant App. To use it, open the New Module wizard, or right-click a class and navigate to Refactor → Modularize.

Gradle Build Speed Improvements

This new release focuses mainly on improving build speed for projects that comprise many modules. In order to support future development and achieve these speed enhancements, breaking API changes have been made to the Android Gradle plugin. If you depended on APIs provided by the previous plugin, you should validate compatibility with the new plugin and migrate to the applicable APIs. To test it, update the plugin version in your build.gradle file.

Google’s Maven Repository

The Android Support Library Maven dependencies are now distributed outside of the Android SDK Manager, in a brand new Maven repository. This makes Maven dependency management simpler when developing with a Continuous Integration (CI) system; CI builds become easy to manage with Google’s Maven repository in combination with the command-line SDK manager tool and Gradle. (You need to add https://maven.google.com to your app module’s build.gradle file in order to use the new Maven location.)

Google Play System Images

With an updated Android O emulator system image that includes the Google Play Store, it becomes possible to do end-to-end app testing with Google Play, and it is more convenient to keep Google Play services up to date in your Android Virtual Device (AVD). Just as Google Play updates services on your physical devices, you can trigger the same updates on your AVDs.

OpenGL ES 3.0 Support in the Android Emulator

In addition to the improved build speeds, the new Android Emulator makes the app development cycle shorter and more efficient. It brings OpenGL ES 3.0 support, offers significant enhancements in OpenGL ES 2.0 graphics performance for older emulator system images, makes generating bug reports simpler, and adds a redesigned UI for proxy settings. Altogether, the improved emulator helps make app testing efficient and keeps Google Play services up to date.

Improved APK Debugging

With this new version it becomes possible to debug an arbitrary APK without building the project in Android Studio. This is very useful for developers who build their Android C++ code on another development platform but need to debug and analyze the APK in Android Studio. As long as you have a debuggable version of your APK, you can use this new feature to analyze, profile and debug it.

In addition, if you have access to the sources of your APK, you can link them to the APK debugging flow to make the debugging process higher fidelity. You can get started simply by selecting Profile or Debug APK from the Android Studio Welcome Screen, or via File -> Profile or Debug APK.

Device File Explorer

The new Device File Explorer allows you to view the file and directory structure of your emulator or Android device. While testing your app, you can also modify app data files and quickly preview them in Android Studio.

Significant Improvement in Layout Inspector

The improved Layout Inspector makes it easier to debug layout issues in your app. The improvements include grouping properties into common categories, as well as search functionality in the View Tree and Properties panels.

APK Analyzer Improvements

The APK Analyzer gains significant new features: to help optimize APK size, you can now analyze Instant App zip files and AARs, and view the dex bytecode of classes and methods.

Android Profiler

These new profiling tools give you real-time data for your app’s CPU, memory and network activity as soon as you deploy the app to an emulator or a connected device. You can do sample-based method tracing to time your code execution, view memory allocations, capture heap dumps, and inspect the details of files transmitted over the network.

Conclusion

The latest Android Studio 3.0 creates several opportunities for Android developers, offering new features that make development faster and more efficient, builds more powerful, and the debugging process more convenient.

Custom ListView With Rounded Circle Images

Using this code you can display your images in a round shape in a ListView.


activity_main.xml

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity" >

    <ListView
        android:id="@+id/android:list"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentLeft="true"
        android:layout_alignParentTop="true" >
    </ListView>

</RelativeLayout>

list_item.xml

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="horizontal"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:padding="8dp">

    <TextView
        android:id="@+id/title"
        android:textColor="#000"
        android:layout_width="0dp"
        android:layout_weight="1"
        android:layout_margin="10dp"
        android:layout_height="wrap_content"/>

    <ImageView
        android:id="@+id/imageview"
        android:layout_width="50dp"
        android:layout_height="50dp"
        android:src="@drawable/ic_launcher"
        android:layout_marginRight="10dp"
        android:contentDescription="@string/app_name"/>

</LinearLayout>

MainActivity.java

package com.example.list;

import java.util.ArrayList;
import java.util.Arrays;

import android.app.Activity;
import android.app.ListActivity;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Path;
import android.graphics.Rect;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.AdapterView;
import android.widget.AdapterView.OnItemClickListener;
import android.widget.BaseAdapter;
import android.widget.ImageView;
import android.widget.ListView;
import android.widget.TextView;
import android.widget.Toast;
public class MainActivity extends ListActivity {

    private String[] listview_names = { "India", "Bangladesh", "China", "Indonesia" };

    static Context mcontext;

    private static int[] listview_images = {
            R.drawable.india, R.drawable.bangladesh, R.drawable.china, R.drawable.indonesia };

    private ListView lv;
    private static ArrayList<String> array_sort;
    private static ArrayList<Integer> image_sort;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        lv = (ListView) findViewById(android.R.id.list);

        array_sort = new ArrayList<String>(Arrays.asList(listview_names));
        image_sort = new ArrayList<Integer>();
        for (int index = 0; index < listview_images.length; index++) {
            image_sort.add(listview_images[index]);
        }

        setListAdapter(new bsAdapter(this));

        lv.setOnItemClickListener(new OnItemClickListener() {
            public void onItemClick(AdapterView<?> arg0, View arg1, int position, long arg3) {
                Toast.makeText(getApplicationContext(), array_sort.get(position),
                        Toast.LENGTH_SHORT).show();
            }
        });
    }

    public static class bsAdapter extends BaseAdapter {

        Activity cntx;

        public bsAdapter(Activity context) {
            this.cntx = context;
        }

        public int getCount() {
            return array_sort.size();
        }

        public Object getItem(int position) {
            return array_sort.get(position);
        }

        public long getItemId(int position) {
            return position;
        }

        public View getView(final int position, View convertView, ViewGroup parent) {
            LayoutInflater inflater = cntx.getLayoutInflater();
            View row = inflater.inflate(R.layout.list_item, null);

            TextView tv = (TextView) row.findViewById(R.id.title);
            ImageView im = (ImageView) row.findViewById(R.id.imageview);

            tv.setText(array_sort.get(position));
            // Decode the drawable, crop it into a circle and set it on the ImageView
            im.setImageBitmap(getRoundedShape(decodeFile(cntx, listview_images[position]), 200));
            return row;
        }
        public static Bitmap decodeFile(Context context, int resId) {
            try {
                mcontext = context;

                // Decode the image size only
                BitmapFactory.Options o = new BitmapFactory.Options();
                o.inJustDecodeBounds = true;
                BitmapFactory.decodeResource(mcontext.getResources(), resId, o);

                // Find the correct scale value. It should be a power of 2.
                final int REQUIRED_SIZE = 200;
                int width_tmp = o.outWidth, height_tmp = o.outHeight;
                int scale = 1;
                while (true) {
                    if (width_tmp / 2 < REQUIRED_SIZE || height_tmp / 2 < REQUIRED_SIZE)
                        break;
                    width_tmp /= 2;
                    height_tmp /= 2;
                    scale *= 2;
                }

                // Decode with inSampleSize
                BitmapFactory.Options o2 = new BitmapFactory.Options();
                o2.inSampleSize = scale;
                return BitmapFactory.decodeResource(mcontext.getResources(), resId, o2);
            } catch (Exception e) {
            }
            return null;
        }
    }

    public static Bitmap getRoundedShape(Bitmap scaleBitmapImage, int width) {
        int targetWidth = width;
        int targetHeight = width;
        Bitmap targetBitmap = Bitmap.createBitmap(targetWidth, targetHeight,
                Bitmap.Config.ARGB_8888);

        // Clip the canvas to a circular path, then draw the source bitmap into it
        Canvas canvas = new Canvas(targetBitmap);
        Path path = new Path();
        path.addCircle(((float) targetWidth - 1) / 2,
                ((float) targetHeight - 1) / 2,
                (Math.min(((float) targetWidth), ((float) targetHeight)) / 2),
                Path.Direction.CCW);
        canvas.clipPath(path);

        Bitmap sourceBitmap = scaleBitmapImage;
        canvas.drawBitmap(sourceBitmap,
                new Rect(0, 0, sourceBitmap.getWidth(), sourceBitmap.getHeight()),
                new Rect(0, 0, targetWidth, targetHeight), null);
        return targetBitmap;
    }
}

Sending JSON Data to a Server Using an Async Thread

If you want to send JSON data from an Android app to a server, how do you go about it? Follow the steps below to do it with an async thread (AsyncTask).

In the Android manifest file, add the following permissions:


<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

The App Layout & Views


The main layout is a LinearLayout.

Three EditText views: FirstName, LastName and Age.

One Button to send the data to the server via a POST request.

The button has an android:onClick attribute that points to the corresponding Java method in the current Activity.

The app layout for activity_main.xml

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:layout_marginLeft="16dp"
    android:layout_marginTop="16dp"
    android:layout_marginRight="16dp"
    android:layout_marginBottom="30dp"
    android:orientation="vertical">

    <EditText android:layout_width="fill_parent"
        android:id="@+id/FirstName"
        android:layout_height="40dp"
        android:hint="First Name"
        android:background="#f3f3f3"
        android:paddingLeft="5dp"
        android:layout_marginTop="30dp"/>

    <EditText android:layout_width="fill_parent"
        android:id="@+id/LastName"
        android:layout_height="40dp"
        android:hint="Last Name"
        android:background="#f3f3f3"
        android:paddingLeft="5dp"
        android:layout_marginTop="15dp"/>

    <EditText android:layout_width="fill_parent"
        android:id="@+id/Age"
        android:layout_height="40dp"
        android:hint="Age"
        android:background="#f3f3f3"
        android:paddingLeft="5dp"
        android:layout_marginTop="15dp"
        android:inputType="number"/>

    <Button android:layout_width="190dp"
        android:layout_height="50dp"
        android:layout_gravity="center_horizontal"
        android:layout_marginTop="16dp"
        android:text="Submit"
        android:textSize="20sp"
        android:id="@+id/userinfo"
        android:onClick="senddatatoserver"/>

</LinearLayout>

Activity

In the Activity, create methods to do the following:

Get references to the views defined in the layout file.

Add a method in the Activity that corresponds to the layout button's onClick attribute.

Get the data from the referenced views.

Create a JSON object and add the data to it as key-value pairs.

Convert the JSON object to a string and pass it to the async background task that makes the network call.

import org.json.JSONException;
import org.json.JSONObject;

import android.app.Activity;
import android.os.AsyncTask;
import android.os.Bundle;
import android.view.View;
import android.widget.EditText;

public class DashboardActivity extends Activity {

    EditText AgeView;
    EditText LastNameView;
    EditText FirstNameView;

    String FirstName;
    String LastName;
    String Age;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Get references to the views defined in the layout file
        FirstNameView = (EditText) findViewById(R.id.FirstName);
        LastNameView = (EditText) findViewById(R.id.LastName);
        AgeView = (EditText) findViewById(R.id.Age);
    }

    public void senddatatoserver(View v) {
        // Method in the activity that corresponds to the layout button
        FirstName = FirstNameView.getText().toString();
        LastName = LastNameView.getText().toString();
        Age = AgeView.getText().toString();

        JSONObject post_dict = new JSONObject();
        try {
            post_dict.put("firstname", FirstName);
            post_dict.put("lastname", LastName);
            post_dict.put("age", Age);
        } catch (JSONException e) {
            e.printStackTrace();
        }

        if (post_dict.length() > 0) {
            // Call to the async class
            new SendJsonDataToServer().execute(String.valueOf(post_dict));
        }
    }

    // Add the background inner class here
    class SendJsonDataToServer extends AsyncTask<String, String, String> {
        @Override
        protected String doInBackground(String... params) {
            // The network call is implemented below.
            return null;
        }
    }
}

Async Background Task

It creates the connection in a separate thread so the UI will not freeze.

Do the following steps to implement it:

Create an inner class within the Activity and extend it from AsyncTask<String, String, String>.

Override the methods doInBackground and onPostExecute.

You may notice that AsyncTask has three type parameters associated with it: <String, String, String>.

The first one is the type of the parameters passed to doInBackground, the second is the type of the parameters passed to the onProgressUpdate function, and the third is the type of the parameter passed to the onPostExecute function.

class SendJsonDataToServer extends AsyncTask<String, String, String> {

    @Override
    protected String doInBackground(String... params) {
        // The network call goes here (see below).
        return null;
    }

    @Override
    protected void onPostExecute(String s) {
        // Handle the server response here.
    }
}

Network call

In doInBackground, do the following things:

Create a URL object for your URL.

Open a URL connection from the URL object.

Set the headers and the output stream.

Write the data to the output stream.

Send the POST request.

Get the response InputStream, convert it to a String and return it.

@Override
protected String doInBackground(String... params) {

    String JsonResponse = null;
    String JsonDATA = params[0];
    HttpURLConnection urlConnection = null;
    BufferedReader reader = null;

    try {
        URL url = new URL("http://appliedinformatics.com/trialx");
        urlConnection = (HttpURLConnection) url.openConnection();
        urlConnection.setDoOutput(true);

        // Set the method and headers
        urlConnection.setRequestMethod("POST");
        urlConnection.setRequestProperty("Content-Type", "application/json");
        urlConnection.setRequestProperty("Accept", "application/json");

        // Write the JSON data to the output stream
        Writer writer = new BufferedWriter(new OutputStreamWriter(urlConnection.getOutputStream(), "UTF-8"));
        writer.write(JsonDATA);
        writer.close();

        // Read the response from the input stream
        InputStream inputStream = urlConnection.getInputStream();
        StringBuffer buffer = new StringBuffer();
        if (inputStream == null) {
            // Nothing to do.
            return null;
        }
        reader = new BufferedReader(new InputStreamReader(inputStream));

        String inputLine;
        while ((inputLine = reader.readLine()) != null) {
            buffer.append(inputLine + "\n");
        }

        if (buffer.length() == 0) {
            // Stream was empty. No point in parsing.
            return null;
        }

        JsonResponse = buffer.toString();
        // TAG is assumed to be a String constant defined in the enclosing class
        Log.i(TAG, JsonResponse);

        // Return the response so it is passed to onPostExecute
        return JsonResponse;

    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (urlConnection != null) {
            urlConnection.disconnect();
        }
        if (reader != null) {
            try {
                reader.close();
            } catch (final IOException e) {
                Log.e(TAG, "Error closing stream", e);
            }
        }
    }

    return null;
}
Thank you. Please comment below if you liked it.

SEO Technical Terms

1. Keyword stuffing

Using the same keywords to excess in a webpage is known as keyword stuffing.

2. Keyword density

Keyword density is the ratio of the number of times a word appears in a webpage to the total number of words on that page. For example, a 500-word page that uses a keyword 10 times has a keyword density of 2% for that keyword.

3. Crawl frequency

Crawl frequency refers to how often search engine crawlers crawl a website. A website which is updated frequently is likely to be crawled more often than a website that is updated rarely. You cannot determine when the crawlers will visit the website next, but you can use the Fetch as Google option in Google webmaster tools to let crawlers know that you have updated content on the site.

4. Crawl depth

Crawl depth is the extent to which a crawler indexes a website. A website contains multiple pages, and a page which sits at the lower end of this page hierarchy has little chance of getting indexed.

5. Blog comment spam

Blog comment spam is spamming by commenting with irrelevant or copied content, or posting promotional text or links in the comments of a blog. Any blog which automatically approves the comments posted by visitors is a target for blog spammers.

6. Canonicalization

Canonicalization is the process of converting the similar URLs of a website to a standard or canonical form. E.g. whenever a user types example.com or http://www.example.com, both redirect to the standard URL http://www.example.com.

7. Traffic

Traffic refers to the number of users visiting the website. Traffic may be organic or paid. Organic traffic refers to visitors coming from clicking a link on a search result page. Paid traffic means visitors coming from an ad displayed on the SERP. Traffic to a website may also come from referral sites such as Facebook, Twitter, etc.

8. robots.txt

robots.txt is a text file, also known as the robots exclusion protocol, which is uploaded to the root directory of a site. This file tells search engine crawlers which files, folders or webpages are not to be crawled or indexed.

9. Sitemap

A sitemap is an XML file that contains all the URLs of the website. Along with each URL it can also contain its priority and how often it is changed or modified. This file helps crawlers locate and crawl all the URLs of the site easily.

10. PR (PageRank)

PageRank is a ranking of a page on the search engine, computed with the help of a link-analysis algorithm.

How to activate Developer options for USB debugging

Developer options:

To activate Developer options in Android, we need to navigate to "Build number" and tap it 7 times. Generally:
Go to Settings > General > About phone.
Then scroll to and select Software information > Build number.
Now rapidly tap on 'Build number' seven times and you will see the message
'You are now a developer!'

After the 7th tap, the Developer options will be unlocked and available. You then need to enable USB debugging in the Developer options menu.



To summarize, here is the path on some common devices:



Stock Android:
Settings > About phone > Build number



Samsung Galaxy S5:
Settings > About device > Build number



LG G3: 
Settings > About phone > Software information > Build number



HTC One (M8):
Settings > About > Software information > More > Build number