Question
How can I join https://github.com/Microsoft/Cognitive-Vision-Android with https://github.com/Azure-Samples/Cognitive-Speech-TTS so that the result of the analyzed image is read out by the Text to Speech API? I am looking to make an app that combines both technologies to help blind people familiarise themselves with certain things by telling them what those things are. Can you please provide a link to the file/folder with the solution, and explain how and where I can use it, as I'm pretty new to Android app development? A Microsoft developer told me: "Seems like it shouldn't be difficult to copy-paste something like this (https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/Samples-Http/Android/TTSSample/app/src/main/java/com/microsoft/ttshttpoxford/ttssample/TTSHttpOxfordMainActivity.java) into your app and send it the output of VisionServiceRestClient.describe."
When someone provides the code, I need to know where to add it and how to test that it works. Basically, I am trying to create an app similar to Seeing AI, but I need help with having the description of the identified image read out loud instead of just shown as text. That is where https://github.com/Azure-Samples/Cognitive-Speech-TTS comes in, as it can do that, but I need to figure out how to add it to my code: https://github.com/zallia2/iSee. All help is appreciated, even if it's too hard to cover everything in one solution. Thanks
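The glue between the two samples is small: the Vision SDK's `describe` call returns an `AnalysisResult` whose description holds a list of captions (each with a `text` string and a `confidence` score), and the TTS sample exposes a synthesizer you hand a string to. The sketch below shows that glue in plain Java, with the SDK types replaced by minimal stand-ins so it runs on its own: `Caption` stands in for the Vision SDK's caption object, and the `Speaker` interface stands in for the TTS sample's `Synthesizer.SpeakToAudio` call. The class and method names here are illustrative, not the actual SDK API.

```java
import java.util.Arrays;
import java.util.List;

public class CaptionSpeaker {

    // Stand-in for the Vision SDK's caption object (text + confidence score).
    static class Caption {
        final String text;
        final double confidence;
        Caption(String text, double confidence) {
            this.text = text;
            this.confidence = confidence;
        }
    }

    // Stand-in for the TTS side: in the real app, implement this by
    // wrapping the TTS sample's synthesizer and calling its speak method.
    interface Speaker {
        void speak(String text);
    }

    // Pick the highest-confidence caption; fall back to a fixed phrase
    // when the service returned no captions at all.
    static String pickBestCaption(List<Caption> captions) {
        String best = "I am not sure what this is.";
        double bestScore = -1.0;
        for (Caption c : captions) {
            if (c.confidence > bestScore) {
                bestScore = c.confidence;
                best = c.text;
            }
        }
        return best;
    }

    // The glue: call this from the callback that receives the describe()
    // result, instead of (or in addition to) setting the text on screen.
    static void describeAloud(List<Caption> captions, Speaker speaker) {
        speaker.speak(pickBestCaption(captions));
    }

    public static void main(String[] args) {
        List<Caption> captions = Arrays.asList(
                new Caption("a dog sitting on the grass", 0.91),
                new Caption("an animal outdoors", 0.55));
        // In the real app the lambda below would be the TTS synthesizer.
        describeAloud(captions, text -> System.out.println("Speaking: " + text));
    }
}
```

In the iSee app, the place to hook this in is wherever the result of `VisionServiceRestClient.describe` currently gets turned into on-screen text (typically the `onPostExecute` of the AsyncTask that ran the request): keep the text display, and additionally pass the same caption string to the synthesizer from the TTS sample. Note the two samples use separate Azure keys (one for Computer Vision, one for Speech), so both must be configured.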