Chords, Conversations and the Kotlin Client Library

From our intent functions we use the provided data to call the handleChordRequest function; this is what fetches the requested chord data and presents it to the user.
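As a reminder of how that wiring might look, here is a minimal sketch of an intent function delegating to handleChordRequest. The intent name, the PARAMETER_CHORD constant and the null handling are assumptions for illustration, not the article's actual handler:

```kotlin
// Hypothetical intent handler for illustration only; the intent name
// and PARAMETER_CHORD are assumptions, not taken from the article.
@ForIntent("learn.chord")
fun learnChord(request: ActionRequest): ActionResponse {
    val chord = request.getParameter(PARAMETER_CHORD) as? String ?: ""
    return handleChordRequest(request, chord)
}
```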

```kotlin
private fun handleChordRequest(request: ActionRequest, chord: String): ActionResponse {
    val responseBuilder = getResponseBuilder(request)
    if (chord.isNotEmpty()) {
        val document = getDatabase().collection(COLLECTION_CHORDS)
            .document(chord).get().get()
        val chordInstructions = buildString(document?.getString(FIELD_PATTERN) ?: "") + ". "
        responseBuilder.add(
            SimpleResponse()
                .setDisplayText(chordInstructions)
                .setTextToSpeech(chordInstructions))
        if (request.hasCapability(Capability.SCREEN_OUTPUT.value)) {
            responseBuilder.add(
                BasicCard()
                    .setTitle(getResource("learn_chord_title").format(chord))
                    .setImage(
                        Image()
                            .setUrl(document.getString("image") ?: "")
                            .setAccessibilityText(
                                getResource("learn_chord_title").format(chord))))
            responseBuilder.add(Suggestion()
                .setTitle(getResource("suggestion_repeat")))
            responseBuilder.add(Suggestion()
                .setTitle(getResource("suggestion_teach_another")))
        }
        return responseBuilder.build()
    } else {
        responseBuilder.add(ActionContext(CONTEXT_LEARN_CHORD_FOLLOWUP, 5))
        val response = getResource("learn_chord_unknown_response")
        responseBuilder.add(
            SimpleResponse()
                .setDisplayText(response)
                .setTextToSpeech(response))
        if (request.hasCapability(Capability.SCREEN_OUTPUT.value)) {
            responseBuilder.add(Suggestion().setTitle("Show me available chords"))
        }
    }
    return responseBuilder.build()
}
```

Most of this function uses standard Firebase interactions; I use Firestore to build a query, execute it and fetch the data:

```kotlin
val document = getDatabase().collection(COLLECTION_CHORDS)
    .document(chord).get().get()
val chordInstructions = buildString(document?.getString(FIELD_PATTERN) ?: "") + ". "
```
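The collection and field references here are plain constants. Their declarations aren't shown in the article, so here is a minimal sketch of what they might look like; the names and values are assumptions inferred from how the fields are read:

```kotlin
// Hypothetical constants; values inferred from the fields read in the
// snippets, the real declarations may differ.
const val COLLECTION_CHORDS = "chords"
const val FIELD_PATTERN = "pattern"
const val FIELD_DISPLAY_NAME = "display_name"
const val FIELD_PACK = "pack"
```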

"Once the data has been retrieved, we initially return a SimpleResponse to the conversation.

Next, if the device has the screen output capability then we use the response builder to return a BasicCard instance.

For this we set a title and provide an image of the chord.

Using the Image class allows us to assign a hero image to our BasicCard instance, which we load from a URL stored in our Firestore document.

```kotlin
responseBuilder.add(
    BasicCard()
        .setTitle(getResource("learn_chord_title").format(chord))
        .setImage(
            Image()
                .setUrl(document.getString("image") ?: "")
                .setAccessibilityText(
                    getResource("learn_chord_title").format(chord))))
```

You may have also noticed that we again use Suggestion chips to provide further conversation interaction points for the user.

Again, these are great to provide for most intents where a screen is available.

```kotlin
responseBuilder.add(Suggestion()
    .setTitle(getResource("suggestion_repeat")))

responseBuilder.add(Suggestion()
    .setTitle(getResource("suggestion_teach_another")))
```

Within the learn chord function you may have seen some functions which build strings for user output. I've provided these here for completeness; they are not a part of the client library, just some standard Kotlin code:

```kotlin
private fun buildString(sequence: String): String {
    var chordSequence = ""
    for (index in sequence.indices) {
        var note = chords[index] + " " + buildNote(sequence[index].toString())
        if (sequence[index] != 'X' && sequence[index] != '0') note += " " + sequence[index]
        if (index < sequence.length - 1) note += ", "
        chordSequence += note
    }
    return chordSequence
}

private fun buildNote(note: String): String {
    return when (note) {
        "X" -> getResource("label_muted")
        "0" -> getResource("label_open")
        else -> getResource("label_fret")
    }
}
```
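To make the helpers concrete, here is a hypothetical example; the contents of chords and the label resource values are assumptions, as the article doesn't show them:

```kotlin
// Hypothetical usage; `chords` and the label resources are assumed.
// With chords = listOf("E", "A", "D", "G", "B", "E"), labels of
// "Muted"/"Open"/"Fret", and the A major pattern "X02220", this
// would produce: "E Muted, A Open, D Fret 2, G Fret 2, B Fret 2, E Open"
val instructions = buildString("X02220")
```

Likewise, getResource is not part of the client library. A minimal sketch, assuming the strings live in a ResourceBundle on the classpath:

```kotlin
// Hypothetical helper; the article doesn't show the real implementation.
private val strings = java.util.ResourceBundle.getBundle("strings")

private fun getResource(key: String): String = strings.getString(key)
```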

Within the conversational tool there is also an intent which can be used to list the available chords to the user. The user might be new to guitar chords, or want to see the options that are available for them to learn. This intent is similar to the previous one: we fetch data from Firestore for display. The only difference is that we fetch all documents from the collection rather than querying for a specific chord.

```kotlin
@ForIntent("available.chords")
fun showAvailableChords(request: ActionRequest): ActionResponse {
    val responseBuilder = getResponseBuilder(request)
    val query = getDatabase().collection(COLLECTION_CHORDS).get().get()
    val documents = query.documents
    val rows = mutableListOf<TableCardRow>()
    var text = ""
    documents.forEach {
        val displayName = it.getString(FIELD_DISPLAY_NAME)
        text += "$displayName, "
        rows.add(TableCardRow()
            .setCells(listOf(
                TableCardCell().setText(displayName),
                TableCardCell().setText(it.getString(FIELD_PACK)))))
    }
    text = text.substring(0, text.length - 2)
    val response = getResource("available_chords_response") + text
    responseBuilder.add(
        SimpleResponse().setDisplayText(response).setTextToSpeech(response))
    if (request.hasCapability(Capability.SCREEN_OUTPUT.value)) {
        responseBuilder.add(
            TableCard()
                .setTitle(getResource("available_chords_table_title"))
                .setSubtitle(getResource("available_chords_table_description"))
                .setColumnProperties(listOf(
                    TableCardColumnProperties().setHeader(
                        getResource("available_chords_table_chord_header")),
                    TableCardColumnProperties().setHeader(
                        getResource("available_chords_table_pack_header"))))
                .setRows(rows))
        responseBuilder.add(Suggestion()
            .setTitle(getResource("suggestion_teach_me_a_chord")))
    }
    return responseBuilder.build()
}
```

Skipping past the parts that we've already covered in this article, you can see that I've made use of the TableCard class.

This class allows us to define a collection of TableCardRow instances, where each row defines the TableCardCell instances whose text is shown within the table.

```kotlin
val rows = mutableListOf<TableCardRow>()

rows.add(TableCardRow()
    .setCells(listOf(
        TableCardCell().setText(displayName),
        TableCardCell().setText(it.getString(FIELD_PACK)))))
```

At this point we have our rows, but we need to place them within a table.

For this we use the TableCard, assign some details to it and then set its column properties; this is the point where we define the number of columns that our table has.

Here, each TableCardColumnProperties instance defines a table header.

We then use the setRows function to make use of the row items that we previously created.

```kotlin
responseBuilder.add(
    TableCard()
        .setTitle(getResource("available_chords_table_title"))
        .setSubtitle(getResource("available_chords_table_description"))
        .setColumnProperties(listOf(
            TableCardColumnProperties()
                .setHeader(getResource("available_chords_table_chord_header")),
            TableCardColumnProperties()
                .setHeader(getResource("available_chords_table_pack_header"))))
        .setRows(rows))
```

We again make use of the Suggestion class here to provide a way for the user to easily learn another chord.

```kotlin
responseBuilder.add(Suggestion()
    .setTitle(getResource("suggestion_teach_me_a_chord")))
```

As well as learning chords, we also provide the ability for the user to tune their guitar; for this we need to make use of audio files.

When the user requests to tune their guitar, we ask which note they want the assistant to play, and we then present that note to them in an audio format.

```kotlin
@ForIntent("play.note")
fun playNote(request: ActionRequest): ActionResponse {
    val responseBuilder = getResponseBuilder(request)
    if (!request.hasCapability(Capability.MEDIA_RESPONSE_AUDIO.value)) {
        val response = getResource("error_audio_playback")
        responseBuilder.add(
            SimpleResponse().setDisplayText(response).setTextToSpeech(response))
        return responseBuilder.build()
    }
    val chord = request.getParameter(PARAMETER_NOTE) as String
    val document = getDatabase().collection(COLLECTION_NOTES)
        .document(chord).get().get()
    val input = document?.get(FIELD_NAME)
    val inputResponse = getResource("play_note_title").format(input)
    responseBuilder.add(
        SimpleResponse().setDisplayText(inputResponse).setTextToSpeech(inputResponse))
    val audioResponse = document?.get(FIELD_AUDIO)
    responseBuilder.add(
        MediaResponse()
            .setMediaType("AUDIO")
            .setMediaObjects(listOf(
                MediaObject()
                    .setName(inputResponse)
                    .setContentUrl(audioResponse as String))))
    if (request.hasCapability(Capability.SCREEN_OUTPUT.value)) {
        responseBuilder.add(Suggestion()
            .setTitle(getResource("suggestion_play_another_note")))
    }
    return responseBuilder.build()
}
```

The first check that you might notice above is the surface capability check.

This intent is pretty useless unless the device has audio capability, so we perform a check here and let the user know that this is required.

```kotlin
if (!request.hasCapability(Capability.MEDIA_RESPONSE_AUDIO.value)) { }
```

After we fetch the data from Firestore (this works the same as the previous examples in this post) we make use of the MediaResponse class to build the response to be presented to the user.

Here we are required to set the type of media that is being used and attach a MediaObject instance to our MediaResponse.

We've already stated that our media type is AUDIO, so here we provide a name to be displayed on the card along with the URL for the content to be played.

```kotlin
responseBuilder.add(
    MediaResponse()
        .setMediaType("AUDIO")
        .setMediaObjects(listOf(
            MediaObject()
                .setName(inputResponse)
                .setContentUrl(audioResponse as String))))
```

And as per the previous intents, we again provide a Suggestion chip to give the user an easy way to continue their conversation.

```kotlin
responseBuilder.add(Suggestion()
    .setTitle(getResource("suggestion_play_another_note")))
```

Our last intent is one to handle the completion of the audio player.

When using audio responses in Actions on Google, the handle.finish.audio intent must be implemented, otherwise an error will be thrown with the response.

All we do here is acknowledge the completion and offer the ability to repeat the previously played note or play another.

```kotlin
@ForIntent("handle.finish.audio")
fun handleFinishAudio(request: ActionRequest): ActionResponse {
    val note = request.getContext(CONTEXT_NOTE_FOLLOWUP)?.parameters
        ?.get(PARAMETER_NOTE) as String
    val responseBuilder = getResponseBuilder(request)
    val inputResponse = getResource("audio_completion_response")
    responseBuilder.add(
        SimpleResponse().setDisplayText(inputResponse).setTextToSpeech(inputResponse))
    if (request.hasCapability(Capability.SCREEN_OUTPUT.value)) {
        responseBuilder.add(Suggestion().setTitle("Repeat $note"))
        responseBuilder.add(Suggestion()
            .setTitle(getResource("suggestion_play_another_note")))
    }
    return responseBuilder.build()
}
```

In this article we've taken a dive into the Kotlin Client Library for Actions on Google, looking at how it's been used to build a real-world production conversational tool.

The client library is fairly new, but it already offers a range of functionality to create conversational tools of your own.

In my next article I plan on looking at the entirety of the library, and in future we’ll look at how we can handle account linking and transactions using the client library.

In the meantime, if you have any questions then feel free to reach out!
