You can also access the article I shared about the user-centric updates from this year's Google I/O.

Two years ago at Google I/O, Kotlin was announced as an officially supported second language for Android, and it has only grown more popular since. More Kotlin-specific talks at conferences, blog posts written mostly with Kotlin samples, Kotlin appearing in the first examples whenever we search how to do something, developer.android.com showing Kotlin in the first tab of its examples, a growing number of Kotlin codelabs, free courses on Udacity, and even a separate GDE category for Kotlin… Considering all of this, the announcement in this year's Developer Keynote that Kotlin is now Android's first language didn't surprise me, at least. Of course, it answered with certainty both the question of where we are headed as individual developers and the question asked at every conference at least once: how can I convince my company to let me write Kotlin? And I think it's worth saying that Kotlin had already won developers' sympathy before the announcement made it official, with its simplicity, reduction of boilerplate, ease of learning, speed of development, and 100% interoperability with Java.
Another Kotlin-related announcement was the Kotlin/Everywhere event series, co-produced by Google and JetBrains. The series will consist mainly of educational content: codelabs, talks, study jams and so on. You can check the site to see whether there is an event near you, and you can either join one or organize your own.
Language support for Java and C++ users will continue. It is a bit sad to see Java, which was declared Android's first development language for years, put in the same basket as C++, but that's how it is for now. 🙂

Jetpack was announced at last year's Google I/O. In fact, I don't think Jetpack offers anything radically different; rather, the libraries, tools and documentation we need to develop high-quality Android applications were gathered in one place, which is a big thing in itself. One of the major changes with Jetpack was that its libraries moved to the androidx.* package while remaining unbundled, backward compatible (down to API 14) and constantly updated. As announced this year, 80% of the top 1,000 applications have started using Jetpack modules.
As for what's new in Jetpack this year: the new CameraX API, a few additions to Architecture Components, and the very surprising Compose. I think CameraX was the one that attracted the most interest in the announcement blog posts.

CameraX supports devices back to Lollipop and is compatible with 90% of the devices on the market. Of course, it is worth following the relevant sessions and reading the posts for the details. It is also a nice touch that we can access device-specific camera features such as HDR, portrait and night mode.
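To give a rough idea of the programming model, here is a minimal sketch of binding a camera preview to a lifecycle. Keep in mind that the API announced at I/O was still in alpha; the sketch follows the later stable androidx.camera API, and `previewView` is assumed to be a `PreviewView` in your layout.

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat

// Sketch: bind a camera preview to the activity's lifecycle with CameraX.
// Assumes `previewView` is an androidx.camera.view.PreviewView in the layout.
fun startCamera(activity: AppCompatActivity, previewView: PreviewView) {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(activity)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }
        cameraProvider.unbindAll() // rebind cleanly on every call
        cameraProvider.bindToLifecycle(
            activity, CameraSelector.DEFAULT_BACK_CAMERA, preview
        )
    }, ContextCompat.getMainExecutor(activity))
}
```

Note that CameraX handles the lifecycle for us here; there is no manual open/close of the camera as with Camera2.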


Continuing with Architecture Components: the components announced last year (WorkManager, Navigation) finally reached version 1.0. Room gained coroutine and RxJava support, and the Lifecycle and LiveData modules gained coroutine integration. SavedState for ViewModel, plus Benchmarking, which makes it easy to measure the performance of your application, are new additions to Architecture Components.
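As a small illustration of the coroutine support and the SavedState module, here is a sketch; the entity, DAO and ViewModel names are made up for the example.

```kotlin
import androidx.lifecycle.SavedStateHandle
import androidx.lifecycle.ViewModel
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.Insert
import androidx.room.PrimaryKey
import androidx.room.Query

// Room 2.1: DAO methods can be declared as suspend functions and called from coroutines.
@Entity
data class Talk(@PrimaryKey val id: Long, val title: String)

@Dao
interface TalkDao {
    @Insert
    suspend fun insert(talk: Talk)

    @Query("SELECT * FROM Talk")
    suspend fun getAll(): List<Talk>
}

// SavedState for ViewModel: values written to the handle survive process death.
class TalksViewModel(private val state: SavedStateHandle) : ViewModel() {
    var lastSelectedId: Long
        get() = state.get<Long>("lastSelectedId") ?: -1L
        set(value) = state.set("lastSelectedId", value)
}
```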


And Jetpack's latest announcement was a surprise for us all: Compose, a reactive UI programming library. If you have tried Flutter a bit, it is exactly the same in logic, and it is also similar to Anko. Okay, maybe it didn't surprise us all. If you know Turkish, I strongly recommend going back and listening to their podcasts. Aside from the fact that they make Android podcasts like a radio show, if you listen to the 5th episode you will see that it was not too surprising for them.
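If you are curious what "reactive UI in Kotlin" looks like, the snippet below is a tiny sketch. What was shown at I/O was an early developer preview, so the syntax here follows the later stable Compose API rather than the one demoed on stage.

```kotlin
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// The UI is just a function of its inputs: when `name` changes, the text recomposes.
@Composable
fun Greeting(name: String) {
    Text(text = "Hello, $name!")
}
```

No XML layout, no view hierarchy to mutate by hand; this is what makes it feel so close to Flutter.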

Let's not forget Live Class Generation in Data Binding. With Android Studio 3.5, for example, when we give an id to the TextView in the layout example below, we can access the view directly without recompiling the project.
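In code it looks roughly like this; `activity_main.xml` and the `welcome_text` id are just placeholders for the example, and the layout is assumed to be wrapped in a `<layout>` tag so that a binding class is generated.

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.databinding.DataBindingUtil

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // ActivityMainBinding is generated from activity_main.xml;
        // the TextView with android:id="@+id/welcome_text" becomes a typed property.
        val binding: ActivityMainBinding =
            DataBindingUtil.setContentView(this, R.layout.activity_main)
        binding.welcomeText.text = "Hello Google I/O"
    }
}
```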

Or like this:

Data Binding is fine, but it would also be nice if there were an API that collects all the checkmarks in the table below.

That API, called View Binding, will arrive with Android Studio 3.6.
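A minimal sketch of what View Binding usage is expected to look like, with the same hypothetical layout as above; it is enabled per module in the Gradle config rather than by wrapping layouts in a `<layout>` tag.

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {
    // Generated from activity_main.xml once view binding is enabled for the module.
    private lateinit var binding: ActivityMainBinding

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        binding = ActivityMainBinding.inflate(layoutInflater)
        setContentView(binding.root)
        // Null-safe and type-safe: no findViewById, no casting.
        binding.welcomeText.text = "Hello View Binding"
    }
}
```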
If we continue with Android Studio, you have probably noticed the articles about Project Marble, at least on Twitter. For the last six months, the Android Studio team stopped developing new features and worked on improving the existing core functionality of the IDE, and Android Studio 3.5 was announced along with fixes for 400+ bugs. Apply Changes, shown in the image below, is essentially Instant Run rewritten from scratch.

Another big announcement last year was the App Bundle. I wrote a detailed article about it after I/O. The App Bundle is a publishing format that lets Google Play generate variations of the app per ABI, language and screen density, so that only the relevant parts are downloaded. Size reductions of up to 30% have been achieved this way. Dynamic Feature Modules, announced last year to provide on-demand feature downloads, reached 1.0 after a long beta period. Of course, if we haven't done so yet, we need to make some changes in the application to take advantage of the App Bundle and Dynamic Features; depending on how we designed the application on day 0, that workload can be bigger or smaller. And while these features matter for app size, they also affect the distribution process. This year, in-app updates were announced.
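For a taste of the in-app updates API, here is a sketch using the Play Core library; `MY_UPDATE_REQUEST_CODE` is just a placeholder, and a flexible (background) update is assumed rather than an immediate one.

```kotlin
import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

const val MY_UPDATE_REQUEST_CODE = 1071 // placeholder request code

// Check whether an update is available on Play and, if so, start a flexible update flow.
fun checkForUpdate(activity: Activity) {
    val appUpdateManager = AppUpdateManagerFactory.create(activity)
    appUpdateManager.appUpdateInfo.addOnSuccessListener { info ->
        if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
            info.isUpdateTypeAllowed(AppUpdateType.FLEXIBLE)
        ) {
            appUpdateManager.startUpdateFlowForResult(
                info, AppUpdateType.FLEXIBLE, activity, MY_UPDATE_REQUEST_CODE
            )
        }
    }
}
```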
The Assistant announcements were shared in two parts: one for content creators and one for app developers. Simply by adding How-to markup to a page, its search results can be presented in a more structured way, and the same markup can be displayed in a similar fashion on Smart Displays. (3rd image)



App Actions, which let us trigger an intent directly by voice, were announced last year, and they were also very simple to implement. However, that was a preview version and was not open to all developers. This year, the following four categories are available, in English only. To get started, you simply prepare an actions.xml file.



Moving on to AI: with ML Kit's on-device translation we can translate between 59 languages inside our own applications, using the same machine-learning models that power Google Translate.
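Roughly, on-device translation looks like this. At the time of the announcement it shipped under Firebase ML Kit; the sketch below uses the later standalone ML Kit artifact, and the English-to-Turkish pair is just an example.

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Download the English->Turkish model if needed, then translate entirely on device.
fun translateGreeting(onResult: (String) -> Unit) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.TURKISH)
        .build()
    val translator = Translation.getClient(options)
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate("Hello Google I/O")
                .addOnSuccessListener { translated -> onResult(translated) }
        }
}
```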

We can also improve the visual search experience of our applications with ML Kit's Object Detection and Tracking API. IKEA, for example, improved its search feature so that users can search simply by pointing the camera at a product.
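A sketch of the object detection side, again written against the later standalone ML Kit API (the original announcement was under Firebase ML Kit); the bitmap is whatever your camera or gallery provides.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

// Detect and coarsely classify the prominent objects in a single image.
fun detectObjects(bitmap: Bitmap, onResult: (List<String>) -> Unit) {
    val options = ObjectDetectorOptions.Builder()
        .setDetectorMode(ObjectDetectorOptions.SINGLE_IMAGE_MODE)
        .enableMultipleObjects()
        .enableClassification()
        .build()
    val detector = ObjectDetection.getClient(options)
    detector.process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener { detectedObjects ->
            // Each detected object carries a bounding box and coarse labels (e.g. "Home good").
            val labels = detectedObjects.flatMap { obj -> obj.labels.map { it.text } }
            onResult(labels)
        }
}
```

From there, the detected region or label can be fed into your own product search, which is essentially what the IKEA example does.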

And finally, Flutter for the Web

PS: If you're here, have read the whole post and liked it, please give it some stars and/or share it on social media 🙂
Reference:
https://www.youtube.com/watch?list=PLOU2XLYxmsILVTiOlMJdo7RQS55jYhsMi&v=lyRPyRKHO8M