Sunday 24 March 2024

AudioCodes Microsoft Teams Phone Manual Update

The other day I pulled out an AudioCodes C450HD that hadn’t been plugged in for a while. When I attempted to sign it into Microsoft Teams, I was greeted with an error saying that the Company Portal software was out of date and needed updating:

 


The phone was recommending that I update it through the Google Play store. Given that Microsoft Teams phones don’t have the Google Play store on them, I could see that this was going to be a problem. The phone also couldn’t be signed into Teams, so I wasn’t going to be able to update it through the Teams service via the regular update method. I figured that a manual process was going to be required.


After doing some searching on the web, I came to realise that the manual update process for these phones was not documented anywhere that I could find. I did find a random PDF that mentioned a tool called the “Teams Phone Utility”, which I hadn’t come across before. Unfortunately, no amount of googling turned up anywhere to actually download this tool.


After hunting around on the AudioCodes software download site and looking through every folder on there, I was able to find a couple of different versions of firmware for the C450HD phone. These were not named in a way that made it clear which one should be used for the upgrade process. In the search, I also stumbled upon a folder called the “Teams IPP Utility”, which contained a tool called the “Android Phone Tool” that did sound promising.




I downloaded a copy of the tool and, after opening it, it looked just like the tool I had seen in the PDF; it was just named differently. Now it was time to guess how to use the tool. I put in the IP address of the phone, went with a username and password of “admin”, and clicked the "SSH Connect" button. Lo and behold, it connected:

 
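As an aside, if you want to confirm the phone’s SSH service is reachable before trying the tool, a quick check along these lines works. This is just a minimal sketch using Python’s paramiko library; the IP address is a placeholder, and the credentials are the “admin” defaults mentioned above:

import paramiko

PHONE_IP = "192.168.1.50"  # placeholder - substitute your phone's IP address

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    # Same credentials the Android Phone Tool uses for its SSH connection
    client.connect(PHONE_IP, username="admin", password="admin", timeout=10)
    print("SSH is reachable - the tool should be able to connect")
finally:
    client.close()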


As mentioned earlier, there were a couple of different types of firmware that I had found on the AudioCodes software site. Some of them were ZIP files and others were IMG files. I noticed that the tool seemed to only accept ZIP or APK file types for the upgrade, so I went with the ZIP file. I also noticed that there were a couple of different kinds of ZIP files: one named C450HD_AN and one named C450HD_TEAMS.



If you open the zip file, it has the following item inside:


I figured that I had better go with the TEAMS-named file, as the other one may be a generic Android load. So I downloaded the TEAMS version and selected it as the “Firmware file (zip)” file:


The firmware version was shown in the tool, so it appeared that it could read the ZIP file and hadn’t immediately failed. Now for the moment of truth: I clicked the Submit button. The tool then popped up a message saying “Processing the update package. This may take a few minutes”:


So I waited. After a few minutes it told me that the process had completed successfully:


Nothing up until this point had happened on the screen of the phone, which was a bit disconcerting. After a few more seconds, a popup showed up on the screen:


After this, the phone took a while, then rebooted and came up with the latest software version! Success!


Note: After upgrading the phone, I noticed that newer versions of the software have a setting for turning on SSH in the Debugging menu of the phone. You will need to turn this on for the tool to connect:


 

The Wrap Up

There you have it: simply keep looking and guessing and you too can find the answer to almost any problem. Hopefully this post saves you all the searching and guessing 😊 Adios!





Sunday 21 January 2024

What’s the Difference Between Microsoft Copilot and ChatGPT?

Introduction

In this post I go into some detail about how the different Copilots in Microsoft 365 operate in practice and show that not all Copilots are created equal. This information could be useful both from a technical perspective and from a staff training perspective. When you roll out Microsoft Copilot, people within the organisation need to understand that the Copilots within the Office applications are not all the same and are each tuned in different ways.

 

A Copilot is a Copilot is a Copilot?

When I first heard about Microsoft Copilot and saw the similar-looking Copilot frame on the right side of the screen, I figured it was probably just a common interface that could access the data from the application you had open at the time. However, after actually getting the opportunity to play with Microsoft Copilot in the various apps, it has become clear that it is actually a lot more complex than that. Each of the Copilots within the apps has been tailored to respond in a context that makes sense for the type of application you’re using. This has been achieved by the engineers at Microsoft using various methods of prompt engineering and orchestration in the background.

I thought it would be useful to demonstrate the differences in the way the various Copilots respond to the exact same prompt. For this demo I have chosen an innocuous query that is not explicit and could be interpreted in different ways, just to see what happens. The query I chose was “Tell me about the weather in Melbourne”. This is not the kind of prompt you would really use in practice, but it works well to highlight the differences in how each Copilot responds.

Let's start by querying the OpenAI ChatGPT 3.5 model to see how this foundation model interprets the request. This will provide a baseline for comparing the responses the exact same prompt produces from the various Copilots.

 

1. ChatGPT 3.5

You will see here that the ChatGPT foundation model has interpreted this question as a request for the specific temperature in Melbourne right now. Because I wasn’t explicit enough in what I asked, I didn’t get back any general information about expected temperature ranges in Melbourne.

In setting up the ChatGPT model, the OpenAI team appear to have designed the system to fail gracefully in cases where it thinks it’s being asked for data that’s more current than it knows about. This is an unfortunate trait of foundation models: they only know information up to the point at which their training finished. It is interesting, though, that it did not respond with more generic information about expected temperatures throughout the year or historical information about the weather (keep this in mind when we get to the Word Copilot example).

 

2. Bing Chat



Bing Chat is geared to behave much more like a web search engine. You can see in the example above that it reached out to the web and pulled back information from various websites about the current and upcoming temperatures in Melbourne. It also gave references to the websites it got this information from.

The method used here is called Retrieval Augmented Generation (RAG), where the orchestrator doesn’t ask the foundation model for the answer to the question directly. Instead, Bing first retrieves some reputable sources for the kind of information being requested and provides that data as part of the prompt to the foundation model (often referred to as grounding the model with data). The foundation model is then used to interpret the retrieved data instead of relying on its own “knowledge” from the data it was trained on. In this case, Bing is functioning as an orchestration engine that retrieves data and compiles it into an expanded prompt that is sent to the ChatGPT model in addition to your original query.
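To make the pattern concrete, here’s a minimal sketch of the RAG flow using the OpenAI Python SDK. The search_web() helper is a stand-in I’ve invented for Bing’s retrieval step; the real orchestration is obviously far more sophisticated and not public:

from openai import OpenAI

client = OpenAI()

def search_web(query: str) -> list[str]:
    # Stand-in for the retrieval step: a real implementation would call
    # a search API and extract relevant passages from the results.
    return ["Melbourne forecast: 18-24 degrees, partly cloudy (example snippet)"]

def answer_with_rag(query: str) -> str:
    # 1. Retrieve reputable sources for the requested information.
    snippets = search_web(query)
    # 2. Ground: compile the retrieved data into an expanded prompt.
    grounded_prompt = (
        "Answer the question using only the sources below, and cite them.\n\n"
        "Sources:\n" + "\n".join(snippets) + "\n\nQuestion: " + query
    )
    # 3. Generate: the model interprets the retrieved data rather than
    #    answering from its training data alone.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": grounded_prompt}],
    )
    return response.choices[0].message.content

print(answer_with_rag("Tell me about the weather in Melbourne"))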

 

3. M365 Chat


When I asked the M365 Chat interface within Teams this question, it responded that it couldn’t find the answer and recommended that I use a web search. This is because the M365 Chat Copilot uses a similar Retrieval Augmented Generation (RAG) framework to Bing. Rather than searching the Internet for information on the weather in Melbourne, it attempted a Semantic Index search (Reference: https://learn.microsoft.com/en-us/microsoftsearch/semantic-index-for-copilot) across the documents, emails, chats and other data within my Office 365 tenant. I didn’t actually have any information within my tenancy on this topic at the time. As a result, M365 Chat was unable to get any information to pass on to the foundation model to provide an answer. What is interesting to me here is that it didn’t just ask the foundation model to have a go at telling me about the weather in Melbourne, but instead apologised for not being able to find any documents about this.


Note: In this case, the Microsoft 365 Chat Copilot was configured to only have access to internal documents and was not enabled for searching the Internet for data. This is a setting that administrators have control over: https://learn.microsoft.com/en-us/microsoft-365-copilot/manage-public-web-access
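As a rough sketch of that refuse-when-ungrounded behaviour, something like the following is happening. Both search_semantic_index() and ground_and_generate() are hypothetical helpers I’ve named for illustration (the latter would do the same grounded-prompt step as the Bing sketch earlier); Microsoft’s actual orchestration is internal and not public:

def answer_from_tenant(query: str) -> str:
    # search_semantic_index() is a hypothetical stand-in for the Semantic
    # Index lookup across the tenant's documents, emails and chats.
    results = search_semantic_index(query)
    if not results:
        # No grounding data found: apologise rather than letting the
        # foundation model have a go from its training data.
        return "Sorry, I couldn't find anything about that in your organisation."
    # Found grounding data: build an expanded prompt and call the model,
    # as in the Bing RAG sketch earlier (ground_and_generate is hypothetical).
    return ground_and_generate(query, results)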


Of course, had I had documents that contained information on the weather in Melbourne, it would have been able to answer me. Below is an example of the output when there is a document containing information about the weather in Melbourne. You will see here that the RAG model has been used to retrieve the data and the document is referenced below the response:


What is also interesting about the previous response is that this information was actually generated in Word as part of a later example that I ran for this blog post. The data being displayed here is an interpretation of information previously generated by the model. I find this interesting, because when data like this keeps getting recycled through these models over time, will the quality of the information start to degrade? Like a photocopy of a photocopy. Here’s an interesting article that goes into more detail on what the long-term result of this could be: https://cosmosmagazine.com/technology/ai/training-ai-models-on-machine-generated-data-leads-to-model-collapse/. Always take care to check the information a Copilot outputs before using it.


 

4. Microsoft Word


Microsoft Word is usually used to create longer-form documents, so Microsoft has tuned the way the foundation model is prompted when you ask it questions in Word. When asked about the weather in Melbourne, the model responded with more of a Wikipedia-style answer, attempting to go into depth about what the climate in Melbourne is like throughout the year.

This is a stark difference from the way the ChatGPT foundation model tried to answer this question. This happens by design, as Microsoft realises that this is more likely what you want in a Word document, rather than wanting to know the temperature right now. The way they do this is by taking the original query and adding additional (“system prompt”) information to it before sending it to the foundation model. This allows them to shape the output into something more like what you might want in a Word document. It’s not clear exactly what Microsoft includes in the prompt it sends to the foundation model, as you never get to see this additional information. If you play around enough with ChatGPT, you can see that adding text like “provide an extended response similar to a reference encyclopaedia” will cause the model to give outputs more like this. I don’t believe it’s documented anywhere exactly what Microsoft adds to the prompts to get these responses, as the prompt engineering is a bit of secret sauce.
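You can approximate this behaviour against the raw model yourself. Here’s a minimal sketch using the OpenAI Python SDK, where the system message stands in for whatever hidden instructions Microsoft actually adds (the wording here is purely my guess for illustration):

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # A guess at the kind of hidden instruction Word might prepend;
        # Microsoft's real system prompt is not public.
        {"role": "system", "content": (
            "You are helping draft a long-form document. Provide an "
            "extended response similar to a reference encyclopaedia.")},
        {"role": "user", "content": "Tell me about the weather in Melbourne"},
    ],
)
print(response.choices[0].message.content)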

  

5. PowerPoint

The PowerPoint Copilot is an even more interesting topic, as it doesn’t just produce text; it will also add pictures and make design choices when producing its output. You can see that for our example weather query it produced a nice picture of Melbourne’s botanical gardens and skyline, created a meaningful heading, and added some dot points about the weather in Melbourne. It looks pretty impressive as an output to such a basic query:


This is all the more impressive when you have some understanding of what’s going on in the background for the PowerPoint Copilot. There is an interesting paper, produced by some of the research staff at Microsoft, about how this works. It can be found here: https://arxiv.org/abs/2306.03460

TLDR: For apps like PowerPoint, the Copilot needs to be able to tell the application itself how to style the page, in addition to just generating text. This kind of thing can be done with scripting languages, which the foundation model could be used to produce (like GitHub Copilot does); however, this method is prone to syntax errors. The researchers at Microsoft found that it was safer to create a specialised domain-specific language for describing the layout of a document (more like a declarative language such as those used for Terraform or PowerShell Desired State Configuration). The language, in this case, is called Office Domain Specific Language (ODSL) and is designed to use a minimal number of tokens (words) and be easily describable as an input to a foundation model. Here’s an example of the language:

# Inserts new "Title and Content" slides after provided ones.
slides = insert_slides(precededBy=slides, layout="Title and Content")

When the prompt is sent to the model, it includes schema information about the ODSL language and the format of the desired response. The model then responds with a description of what each slide should look like, in the desired ODSL format. The response is thoroughly checked and validated to have the right format, and then translated into a lower-level language by an interpreter program, which gets executed by PowerPoint. It is both very cool and a little crazy that the foundation models are powerful enough to do these kinds of things.
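As a rough illustration of that validate-then-execute flow, here’s a small Python sketch. The allowed operation names and the interpreter hand-off are invented for illustration; the real pipeline is described in the paper above:

# All names here are illustrative, not Microsoft's actual implementation.
ALLOWED_OPERATIONS = {"insert_slides", "insert_text", "format_shapes"}

def validate_odsl(model_output: str) -> list[str]:
    program = []
    for line in model_output.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        # Only known ODSL operations are allowed through to execution.
        operation = line.split("=", 1)[-1].split("(", 1)[0].strip()
        if operation not in ALLOWED_OPERATIONS:
            raise ValueError(f"Unrecognised ODSL operation: {operation}")
        program.append(line)
    return program

# The validated program would then be handed to an interpreter that
# translates it into the lower-level commands PowerPoint executes.
program = validate_odsl(
    'slides = insert_slides(precededBy=slides, layout="Title and Content")'
)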

 

6. Outlook

When you write an email to your colleagues, you don’t really want to be known as the person who writes the dreaded War and Peace novel-length emails. Fortunately, Microsoft is aware of this and took it into account when designing the Outlook Copilot. This Copilot is designed to produce output that looks, in both format and content, like an email. You can see below that the simple weather-in-Melbourne prompt actually created something that looks and reads like an email. I must admit it did take a bit of artistic licence and go on a bit more of a ramble than I would have liked in this case, though:


 

7. Excel

The Excel Copilot is once again quite different from the other Copilots. Asking it about the weather is not exactly what it’s supposed to be used for, but I asked it anyway, because, why not?:

In Excel, the Copilot is more for creating formulas and reasoning over the data in your spreadsheets. In the current preview version, the Copilot will only work on data that is in a defined table. This is likely because the data needs to be ordered in such a way that it can be sent as a prompt to the foundation model. In doing this, the data needs to retain all the column and row information while keeping the token count low enough to be processed. I’m not sure how Microsoft could process an entire very large spreadsheet (with the potential complexity of multiple sheets, scattered data, etc.) through the foundation models given their current token limits. Until they figure this out, we may be stuck with only processing data that is in defined, smaller tables.
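To see why a defined table matters, here’s a minimal sketch of how tabular data might be serialised into a prompt while preserving the row and column structure and keeping the token count down. This is my own illustration of the general idea, not Microsoft’s actual implementation:

def table_to_prompt(headers, rows, question):
    # Serialise compactly (CSV-style) so column and row structure is
    # preserved but the token count stays low.
    lines = [",".join(headers)]
    lines += [",".join(str(cell) for cell in row) for row in rows]
    return "Given this table:\n" + "\n".join(lines) + "\n\nQuestion: " + question

prompt = table_to_prompt(
    ["City", "Celsius"],
    [["Melbourne", 24], ["Sydney", 22]],
    "Which city is warmer?",
)
# The serialised prompt, not the raw spreadsheet, is what gets sent to the model.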

If you are wondering what the Excel Copilot can actually do, though, here’s an example of asking it to reason over the data in a table and give you an answer:



Also, here’s an example of asking the Excel Copilot for a formula to produce a Fahrenheit column from a Celsius column:

 
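For reference, the underlying conversion is °F = °C × 9/5 + 32, so (assuming the Celsius values start in cell A2) the formula it suggests will typically be along the lines of =A2*9/5+32, filled down the new column.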

 

8. Microsoft Whiteboard

The Microsoft Whiteboard Copilot has yet another take on what it produces from our modest weather question. It produced a bunch of sticky notes suggesting various things the weather in Melbourne could be. This output is contextualised more toward a brainstorming type of session, which is common when using a Whiteboard:


 


This is, once again, a fun and different take on how a foundation model can be used to produce a more context-aware output for the application at hand.

 

The Wrap Up

As you can see, the Copilots across the Microsoft Office apps are all very different beasts, and this is something that people within your organisation should understand in order to get the most out of the Copilot product set. This is certainly something to keep in mind when training staff on the potential use cases and determining which Copilot is right for the task at hand. Cheers!




