Identify elements of a search solution

A search index contains your searchable content. In an Azure AI Search solution, you create a search index by moving data through the following indexing pipeline:

  1. Start with a data source: the storage location of your original data artifacts, such as PDFs, video files, and images. For Azure AI Search, your data source could be files in Azure Storage, or text in a database such as Azure SQL Database or Azure Cosmos DB.
  2. Indexer: automates the movement of data from the data source through document cracking and enrichment to indexing. An indexer automates a portion of data ingestion and converts the original content to JSON (in an action called JSON serialization).
  3. Document cracking: the indexer opens files and extracts content.
  4. Enrichment: the indexer moves data through AI enrichment, which implements Azure AI on your original data to extract more information. AI enrichment is achieved by adding and combining skills in a skillset. A skillset defines the operations that extract and enrich data to make it searchable. These AI skills can be either built-in skills, such as text translation or Optical Character Recognition (OCR), or custom skills that you provide. Examples of AI enrichment include adding captions to a photo and evaluating text sentiment. AI enriched content can be sent to a knowledge store, which persists output from an AI enrichment pipeline in tables and blobs in Azure Storage for independent analysis or downstream processing.
  5. Push to index: the serialized JSON data populates the search index.
  6. The result is a populated search index that can be explored through queries. When users make a search query such as “coffee”, the search engine looks for that information in the search index. A search index has a structure similar to a table, known as the index schema. A typical search index schema contains fields, each field’s data type (such as string), and field attributes. The fields store searchable text, and the field attributes allow for actions such as filtering and sorting; a minimal schema sketch follows this list.
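
For illustration, here is a minimal sketch of defining such an index schema with the azure-search-documents Python SDK. The service endpoint, admin key, index name, and fields shown are placeholder assumptions, not values from this unit:

    # Sketch: define an index schema with fields, data types, and field attributes.
    # The endpoint, key, index name, and field names are illustrative placeholders.
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents.indexes import SearchIndexClient
    from azure.search.documents.indexes.models import (
        SearchIndex,
        SimpleField,
        SearchableField,
        SearchFieldDataType,
    )

    index_client = SearchIndexClient(
        endpoint="https://<your-search-service>.search.windows.net",
        credential=AzureKeyCredential("<your-admin-api-key>"),
    )

    fields = [
        # Key field that uniquely identifies each document in the index.
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        # Searchable field that stores the full text users query against.
        SearchableField(name="description"),
        # Attributes on this field enable filtering and sorting.
        SimpleField(name="category", type=SearchFieldDataType.String,
                    filterable=True, sortable=True),
    ]

    index = SearchIndex(name="products-index", fields=fields)
    index_client.create_or_update_index(index)

Pushing the serialized JSON documents into the index (step 5) then amounts to uploading documents whose properties match these fields.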

What is Azure AI Search?

Azure AI Search results contain only your data, which can include text inferred or extracted from images, or new entities and key phrases detected through text analytics. It’s a Platform as a Service (PaaS) solution. Microsoft manages the infrastructure and availability, allowing your organization to benefit without the need to purchase or manage dedicated hardware resources.

Azure AI Search features

Azure AI Search exists to complement existing technologies and provides a programmable search engine built on Apache Lucene, an open-source software library. It’s a highly available platform offering a 99.9% uptime Service Level Agreement (SLA) for cloud and on-premises assets.

Azure AI Search comes with the following features:

  • Data from any source: accepts data from any source provided in JSON format, with auto crawling support for selected data sources in Azure.
  • Multiple options for search and analysis: including vector search, full text, and hybrid search.
  • AI enrichment: has Azure AI capabilities built in for image and text analysis from raw content.
  • Linguistic analysis: offers analysis for 56 languages to intelligently handle phonetic matching or language-specific linguistics. Natural language processors available in Azure AI Search are also used by Bing and Office.
  • Configurable user experience: has options for query syntax including vector queries, text search, hybrid queries, fuzzy search, autocomplete, geo-search filtering based on proximity to a physical location, and more; a minimal query sketch follows this list.
  • Azure scale, security, and integration: at the data layer, machine learning layer, and with Azure AI services and Azure OpenAI.
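
As one concrete example of the query options above, the following sketch runs a simple full-text query with the azure-search-documents Python SDK. The endpoint, query key, index name, and field names are the same placeholder assumptions used in the earlier schema sketch:

    # Sketch: run a full-text query with an OData filter, using placeholder values.
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    search_client = SearchClient(
        endpoint="https://<your-search-service>.search.windows.net",
        index_name="products-index",
        credential=AzureKeyCredential("<your-query-api-key>"),
    )

    # Full-text search for "coffee", narrowed to a single category.
    results = search_client.search(
        search_text="coffee",
        filter="category eq 'beverages'",
        top=10,
    )

    for doc in results:
        print(doc["id"], doc["description"])

Vector and hybrid queries use the same client but supply query vectors alongside, or instead of, the search text.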

Discover ways to use AI at work

Explore ways to use AI at work

Microsoft 365 Copilot Chat is a shared chat experience within Microsoft 365 Copilot, and is available at no additional cost with eligible Microsoft 365 and Office 365 licenses. With a Microsoft 365 Copilot license, Copilot Chat becomes more powerful by connecting to your work data across Microsoft 365 applications like Teams, Word, Outlook, PowerPoint, and Excel. This helps connect your questions with business information and apps, bringing out important details from the company’s data.

Some common examples include:

  • Outlook: Summarize the content of a large email thread.
  • PowerPoint: Turn a text-heavy slide into concise bullet points for greater clarity.
  • Word: Rewrite a paragraph in a different tone or style.
  • Teams: Summarize meetings and chat threads.


To learn more, read Introducing Copilot agents and watch the short video Copilots & Agents.

The real power of Microsoft 365 Copilot comes from the flow of using it across apps. There are many use cases for Copilot in Microsoft 365, including in communications, HR, legal, IT, sales, project management, market research, finance, and more. The Copilot scenario library can help you get started.

The following table describes one sample functional scenario, Using Copilot to draft an internal communications post.

What is AI?

AI can simplify everyday tasks and enhance productivity. This unit introduces the basics of AI, including how generative AI creates new content, and how tools like Microsoft Copilot can enhance productivity. You also explore the importance of using AI responsibly to ensure fairness, transparency, and trust.

What is AI, and how does it work?

Generative AI focuses on creating new, unique content, based on the input you provide. Providing this input is called prompting, which just means asking the AI for specific things. Generative AI can even produce creative content, such as writing poems, composing melodies, or designing graphics, based on the patterns and styles it learned from existing data.

People usually interact with generative AI built into a chat application. One example of such an application is Microsoft Copilot, an AI-powered productivity tool designed to enhance your work experience by providing real-time intelligence and assistance. In other words, it’s a smart tool that helps you work better by giving you quick answers and help when you need it.

Explore the potential and limitations of AI tools

AI has rapidly become an integral part of our day-to-day lives. It is revolutionizing the way that trainers work, communicate, and interact with technology. From personalized news feeds to autocomplete email suggestions and online meeting transcriptions, AI has shown immense potential for improving productivity and efficiency. However, even as you embrace this technology, it’s crucial for you to understand its capabilities and limitations and to learn how to use it responsibly.

Incorporating AI into your work creates opportunities. AI functions as a copilot, offering suggestions, insights, and solutions to improve your work. It streamlines operations, freeing up your time for higher-level thinking and innovation.

AI, like all transformative technologies, has its unique strengths and areas for improvement. Recognizing these aspects can help you leverage AI more effectively. Here are some areas where AI continues to evolve and improve:

  • Data accuracy and diversity. AI systems are trained on data from various sources, which may contain inaccuracies, systemic biases, and societal biases. This challenge also presents an opportunity for continuous learning and improvement in AI systems. By critically evaluating AI-generated content, we can contribute to the refinement of these systems.
  • Understanding context. While AI might find it challenging to analyze content with humor, sarcasm, and irony, it’s worth noting that this is an area of active research and development. The ability of AI to operate based on patterns and data is a strength that can be harnessed while being mindful of these nuances.
  • Language and regional adaptability. Some AI interfaces may have language and regional limitations. However, this also highlights the ongoing efforts in expanding the capabilities and language support of AI tools. For instance, while Microsoft Copilot in Microsoft 365 may currently process instructions primarily in English, advancements are continually being made to broaden its linguistic range.

AI is a powerful tool transforming work and life. Despite limitations, challenges present opportunities for growth and innovation. Using AI responsibly and being mindful of limitations can harness its potential to enhance lives and work. Here are ways to use AI responsibly, even with limitations:

  • Understand the tools you use. Reading product transparency notes and understanding the capabilities and limitations of AI tools can help make informed decisions.
  • Review and verify. Always review and verify AI-generated content for accuracy, especially when dealing with critical information. Double-check information when necessary.
  • Embrace responsible AI principles. Responsible AI principles include reliability, safety, privacy, inclusiveness, transparency, and accountability. Using AI in a way that respects privacy, avoids discrimination, and promotes fairness is important.

Cognitive Engagement with AI Tools

Generative AI can enhance creativity, productivity, and skills. Integrating AI into workflows frees up time to promote learner engagement. Learners’ cognitive engagement decreases during passive activities but increases during interactive activities. Deeper cognitive engagement leads to higher-level learning outcomes. The table below lists learning types, their definitions, and examples of learner activities that correspond to those learning types.

AI-powered tools can also enhance learner engagement by providing immersive and hands-on learning experiences. AI tools can bring abstract concepts to life, encouraging active participation and promoting deeper understanding. Furthermore, AI-powered educational games and quizzes can make learning more fun and interactive, increasing learner enthusiasm and participation. Examples of AI-powered tools include Reading Coach, Search Coach, and Speaker Coach by Microsoft.

Speaker Coach is a tool designed to support learners in improving their presentation abilities. It offers learners opportunities to practice their presentation and receive feedback on their performance, highlighting both their strengths and areas for improvement. This helps learners make their presentations clearer and avoid talking too much or just reading from slides.

Reading Coach is a tool designed to support personalized training approaches. It offers interactive activities and exercises to improve learners’ reading skills in an engaging way. Reading Coach allows the creation of targeted practice exercises specific to learners’ struggles, enabling the teaching of pronunciation and fluency through fun activities.

Search Coach is a tool designed to provide relevant information to improve online searches. With Search Coach, learners can navigate online resources confidently and discern credible sources from unreliable ones.

Follow the responsible AI standard playbook

Technological advances make artificial intelligence an integral part of our daily lives. While AI opens many opportunities and possibilities, it’s also prone to errors that can lead to harm. To protect users from these potential risks, practical guidelines are needed that help steer the creation and application of AI toward more beneficial and fair outcomes.

Microsoft’s Responsible AI Standard Playbook was developed to bridge the policy gap in AI, providing concrete guidance for upholding the company’s AI principles. This living document, now in its second version, is part of an ongoing effort to refine AI norms and practices. It’s designed to evolve with new insights and regulations, contributing to the global dialogue on responsible AI development. Microsoft encourages collaboration across sectors to further this initiative, emphasizing the need for principled, actionable standards in AI deployment.

The Responsible AI Standard also helps users determine whether an AI system is created and implemented with responsible AI principles in mind. It’s composed of two key aspects:

Goals: Goals or outcomes are the conditions that must be achieved in creating an AI system. These goals help break down the six responsible AI principles, like “accountability”, into specific goals such as impact assessment, data governance, and human oversight.

Requirements: Requirements are the steps that must be taken to meet each goal. They translate the goals into concrete actions that teams follow when building and deploying AI systems.

Use guidelines for human-AI interaction

AI systems are changing the way you work with technology, opening new possibilities and convenience. These systems can bring a dynamic element to your work, but this dynamism can sometimes lead to unpredictability. To help make your experience of AI positive and human-centered, Microsoft has established guidelines for human-AI interactions.

The guidelines provide recommendations for creating meaningful AI-infused experiences that leave you in control and that respect your values, goals, and attention. The guidelines are grouped into four categories.

Setting Expectations: The first category emphasizes the importance of clarity regarding the AI system’s capabilities and performance. It’s essential to articulate what the system can do and how well it can perform these tasks. This level of transparency helps users understand the AI system’s limitations and expected performance, setting the stage for realistic expectations and trust in the system. Providing concrete examples can further illustrate the system’s practical applications and functionalities.

Contextual Relevance: The second category focuses on the AI system’s ability to provide timely, contextually relevant services that are socially and culturally appropriate. The system should be designed to adapt to a variety of cultural and social contexts, recognizing and respecting diversity to ensure inclusivity. Actively identifying and reducing biases in AI algorithms is also crucial to maintain fairness and equity in the system’s operations.

Error Handling: The third category deals with planning for scenarios when the AI system is incorrect. It’s important to have robust error-handling mechanisms in place, allowing users to easily dismiss and correct inaccurate services. This empowers users to maintain control over the AI system and fosters trust, knowing they can intervene and rectify issues as needed.

Adaptive Learning: The fourth category highlights the necessity for the AI system to learn and adapt over time based on user feedback. Users must be able to teach the implemented AI system through granular feedback and have global controls to customize the system’s monitoring and operations. It’s also important for the system to inform users about updates to its capabilities, ensuring that the AI remains a helpful and relevant tool that evolves with the users’ needs.

Building on the foundation of human-AI interaction guidelines, we transition to the Microsoft Responsible AI Standard Playbook. This playbook represents a pivotal evolution in Microsoft’s approach to AI, addressing the complexities and challenges that come with integrating AI into our daily routines. While the guidelines for human-AI interaction focus on the immediate interface between humans and technology, the Responsible AI Standard broadens the scope to encompass the ethical framework that governs AI development and deployment.

Load data for exploration

Loading and exploring data are the first steps in any data science project. They involve understanding the data’s structure, content, and source, which are crucial for subsequent analysis.

After connecting to a data source, you can save the data into a Microsoft Fabric lakehouse. You can use the lakehouse as a central location to store any structured, semi-structured, and unstructured files. You can then easily connect to the lakehouse whenever you want to access your data for exploration or transformation.

Load data using notebooks

Notebooks in Microsoft Fabric facilitate the process of handling your data assets. Once your data assets are located in the lakehouse, you can easily generate code within the notebook to ingest these assets.

Consider a scenario where a data engineer has already transformed customer data and stored it in the lakehouse. A data scientist can easily load the data using notebooks for further exploration to build a machine learning model. This enables work to start immediately, whether that involves additional data manipulations, exploratory data analysis, or model development.

Let’s create a sample Parquet file to illustrate the load operation. The following PySpark code creates a dataframe of customer data and writes it to a Parquet file in the lakehouse.
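
A minimal sketch of such a cell follows. It assumes a Fabric notebook attached to a lakehouse, where the spark session is already available; the customer columns, values, and output path are illustrative:

    # Sketch: build a small customer dataframe and write it to the lakehouse as Parquet.
    # Assumes a Fabric notebook (spark session predefined); the columns, values, and
    # output path are illustrative placeholders.
    data = [
        (1, "Maria Gonzalez", "maria@contoso.com", 34),
        (2, "Liam Chen", "liam@contoso.com", 29),
        (3, "Amara Okafor", "amara@contoso.com", 41),
    ]
    columns = ["CustomerID", "Name", "Email", "Age"]

    df = spark.createDataFrame(data, schema=columns)

    # Write to the Files area of the attached lakehouse.
    df.write.mode("overwrite").parquet("Files/customer_data")

    # Reading the data back for exploration is just as simple.
    customers = spark.read.parquet("Files/customer_data")
    display(customers)

A data scientist can then load the same Files/customer_data path in their own notebook and continue with exploratory analysis or model development.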

Apply critical thinking when using generative AI

Generative AI models used for content generation are trained on large amounts of data from various sources. The content generated by these models might exhibit machine learning bias. Machine learning bias happens when an AI model generates biased content due to inaccuracies or imbalances in the data used to train the model. Ensuring accuracy, relevance, and impartiality in content requires critical thinking skills.

Critical thinking is the ability to analyze, evaluate, and improve your own reasoning. This skill is essential while utilizing generative AI. Applying critical thinking helps to verify, interpret, and improve the content you create and consume.

Critical thinking consists of:

  1. Interpretation: Drawing inferences beyond the literal meaning of content generated by AI tools. For example, learners might read a description of a historical period and infer why people behaved the way they did during that time.
  2. Analysis: Identifying the parts of a whole and their relationships to one another. For example, learners might investigate local environmental factors to determine which are most likely to affect migrating birds.
  3. Synthesis: Identifying relationships between two or more ideas. For example, learners might be required to compare perspectives from multiple sources.
  4. Evaluation: Judging the quality, credibility, or importance of data, ideas, or events. For example, learners might read different accounts of a historical event and determine which ones they find most credible.

Users can produce good quality AI-generated content easily, quickly, and responsibly by using critical thinking. Here are a few steps you can take to ensure you use generative AI tools responsibly.

  1. Accuracy check: Double-checking facts for accuracy is essential when using generative AI tools. You can prompt Large Language Models (LLMs) to cite the sources used to generate content for your prompt. It’s important to check the cited sources to ensure they’re current, reliable, and from a reputable website.
  2. Ask questions and seek feedback: While creating and consuming generative AI content, ask yourself questions such as: What is the purpose? Who is the intended audience? How reliable are the sources and information? Asking questions and seeking feedback helps improve your understanding of the content.
  3. Compare and contrast: Utilize different parameters and descriptions for the same prompt to see if the content generated is relevant and similar. Use critical thinking skills to interpret the results from your prompt. Reflect on your own critical thinking skills and assess how you analyzed and evaluated the different answers.
  4. Refer to the content policies: AI tool creators publish guidelines on how to use their tools responsibly. For example, the Microsoft content policy for Image Creator from Microsoft Designer prohibits content generation depicting child exploitation, child sexualization, adult content, human trafficking, self-harm, acts of terrorism, and violence against others. In summary, these guidelines aim to ensure responsible use of the Image Creator tool while contributing to a safer online environment.
  5. Legal requirements: Be informed about legislative changes on AI tool use in your work, and disclose the use of AI tools in content generation when required.