Monday, August 4, 2025

How to Deploy Your HTML Website on a Linux Server (Apache + SFTP)

Launching a simple HTML website on your own Linux server is easier than you might think. Whether you're sharing a static landing page or integrating HTML into a backend system like Flask, here's a complete hands-on guide you can follow.


🔥 Method 1 — Deploying Directly to Apache Web Server

Apache by default serves files from the directory: 
/var/www/html

Step-1: Create/Verify the HTML Folder

sudo mkdir -p /var/www/html
cd /var/www/html

Step-2: Upload Your HTML File

Use an SFTP tool like FileZilla, WinSCP, or the sftp command line to copy cortex.html (or your HTML file) from your local machine to /var/www/html.
💡 You may need write permissions on the folder from your server admin.
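If you prefer to script the upload instead of using a GUI client, the same transfer can be done from Python with the paramiko library; a minimal sketch (server address and credentials below are placeholders):

import paramiko

# connect over SSH and open an SFTP session (credentials are placeholders)
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("your-server-ip", username="youruser", password="yourpassword")

sftp = ssh.open_sftp()
sftp.put("cortex.html", "/var/www/html/cortex.html")  # local path -> remote path
sftp.close()
ssh.close()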

Step-3: (Optional) Create/Edit HTML Directly in Server

sudo vim /var/www/html/cortex.html

Common Vim commands:
:wq  Save and exit
:q!  Exit without saving
:w   Save only (don't exit)

Step-4: Set Correct Permissions

sudo chmod 644 /var/www/html/cortex.html
sudo chmod 644 /var/www/html/index.html

Step-5: Fix File Naming (optional)

Linux filenames are case-sensitive, so Cortex.html and cortex.html are treated as different files. Rename the file to match the URL you plan to use:

sudo mv /var/www/html/Cortex.html /var/www/html/cortex.html

Step-6: Confirm Apache DocumentRoot

Check your Apache configuration (typically /etc/httpd/conf/httpd.conf on CentOS/RHEL, or /etc/apache2/sites-available/000-default.conf on Ubuntu/Debian) and confirm the DocumentRoot points at the folder you used:

DocumentRoot "/var/www/html"

Step-7: Restart Apache

sudo systemctl restart httpd      # CentOS/RHEL
sudo systemctl restart apache2    # Ubuntu/Debian

Your page should now load at:
http://<server-ip>/ or http://<server-ip>/cortex.html


Method 2 — Serve HTML Inside a Flask App on Custom Port (81)

Sometimes, we want to deploy HTML via a Flask application running behind Apache on a different port, e.g. 81.

Folder Structure

/var/www/flaskapp/
└── templates/
    ├── index.html
    └── se.html   <- (optional subpage)

Step-1: Copy the HTML into Flask Template

sudo cp /var/www/html/cortex.html /var/www/flaskapp/templates/index.html

Step-2: Add Apache VirtualHost for Port 81

<VirtualHost *:81>
    ServerName yourdomain.com
    DocumentRoot /var/www/flaskapp
    WSGIScriptAlias / /var/www/flaskapp/flaskapp.wsgi
</VirtualHost>
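This VirtualHost assumes Apache is listening on port 81 (a Listen 81 directive in the main configuration), that mod_wsgi is installed, and that a WSGI entry point exists at /var/www/flaskapp/flaskapp.wsgi. The Flask application itself is not shown in this post, so here is a minimal sketch of both files (file names and routes are assumptions based on the paths above):

# /var/www/flaskapp/flaskapp.wsgi
import sys
sys.path.insert(0, '/var/www/flaskapp')        # make flaskapp.py importable
from flaskapp import app as application        # mod_wsgi looks for the name 'application'

# /var/www/flaskapp/flaskapp.py
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')       # serves templates/index.html

After changing either file, restart Apache so mod_wsgi picks up the new code.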

Step-3: Restart Apache

sudo systemctl restart httpd

Open in browser:
http://<server-ip>:81/

Updating HTML Pages Later

Upload the new version via SFTP to /var/www/, then run:

sudo cp /var/www/cortexV3.html /var/www/flaskapp/templates/index.html
sudo systemctl restart httpd

Adding New Pages

sudo cp /var/www/cortexV4.html /var/www/flaskapp/templates/se.html
sudo chmod 644 /var/www/flaskapp/templates/se.html
sudo systemctl restart httpd

Open page at: http://<server-ip>:81/se
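The /se URL works only if the Flask app defines a matching route that renders the new template; a minimal sketch (route and function names are assumptions based on the URL above):

@app.route('/se')
def se():
    return render_template('se.html')      # serves templates/se.html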

Summary

Task                   Command (Linux)
Create HTML folder     sudo mkdir -p /var/www/html
Upload via SFTP        Use FileZilla/WinSCP or sftp
Edit file              vim /var/www/html/filename.html
Set permission         sudo chmod 644 file.html
Restart Apache         sudo systemctl restart httpd
Copy to Flask app      sudo cp source destination

Final Thoughts

With just a handful of commands and SFTP access, you can deploy static HTML pages directly with Apache or plug them into a Python-based Flask app running on a custom port. This manual method gives you full control over your deployment — perfect for learning, testing, or lightweight production websites.

How to Convert Outlook MSG Files to XML with a Simple Web App

 Outlook MSG files are commonly used to store individual email messages, including metadata, body content, and attachments. However, working with MSG files directly can be challenging when you need to integrate or process email data in other systems.

In this guide, we will build a Python-based web application that allows users to:
  • Upload Outlook MSG files
  • Convert them into structured XML format
  • Download the XML file
✔ With a clean HTML interface


✅ Why Convert MSG to XML?

  • Data Interoperability: XML is widely supported for system integrations.

  • Structured Format: Easily extract sender, recipients, body, and attachments in a standard schema.

  • Automation: Helps migrate email content into other tools like CRMs, ERP systems, or custom applications.


✅ Tools and Libraries Used

  • Python 3.x

  • Flask – lightweight web framework

  • extract-msg – library to read .msg files

  • HTML + CSS – for a simple and responsive UI

Install the dependencies:
pip install flask extract-msg

✅ Project Structure
msg-to-xml/
├── app.py              # Flask backend
├── templates/
│    └── index.html     # Upload & download UI
└── uploads/            # Temporary storage


✅ Step 1: Build the Backend (Flask)

Here's the complete app.py code:


from flask import Flask, render_template, request, send_file
import os
import xml.etree.ElementTree as ET
import extract_msg

app = Flask(__name__)
UPLOAD_FOLDER = 'uploads'
os.makedirs(UPLOAD_FOLDER, exist_ok=True)


@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        if 'file' not in request.files:
            return "No file part"
        file = request.files['file']
        if file.filename == '':
            return "No selected file"
        if file:
            # For production, consider sanitizing file.filename (e.g. werkzeug's secure_filename)
            file_path = os.path.join(UPLOAD_FOLDER, file.filename)
            file.save(file_path)

            # Convert MSG to XML and return it as a download
            xml_path = convert_msg_to_xml(file_path)
            return send_file(xml_path, as_attachment=True)
    return render_template('index.html')


def convert_msg_to_xml(msg_path):
    msg = extract_msg.Message(msg_path)

    root = ET.Element("Email")
    ET.SubElement(root, "Subject").text = msg.subject or ''
    ET.SubElement(root, "Sender").text = msg.sender or ''
    ET.SubElement(root, "Date").text = str(msg.date) if msg.date else ''
    ET.SubElement(root, "Body").text = msg.body or ''

    # Recipients
    recipients = ET.SubElement(root, "Recipients")
    for r in msg.recipients:
        try:
            recipient_text = f"{r.name} <{r.email}>"
        except Exception:
            recipient_text = str(r)
        ET.SubElement(recipients, "Recipient").text = recipient_text

    # Attachments metadata
    attachments = ET.SubElement(root, "Attachments")
    for att in msg.attachments:
        att_elem = ET.SubElement(attachments, "Attachment")
        ET.SubElement(att_elem, "Filename").text = att.longFilename or att.shortFilename or ''
        ET.SubElement(att_elem, "Size").text = str(len(att.data)) if att.data else '0'

    tree = ET.ElementTree(root)
    xml_filename = os.path.splitext(os.path.basename(msg_path))[0] + ".xml"
    xml_path = os.path.join(UPLOAD_FOLDER, xml_filename)
    tree.write(xml_path, encoding="utf-8", xml_declaration=True)
    return xml_path


if __name__ == '__main__':
    app.run(debug=True)



✅ Step 2: Create the HTML UI

Save this as templates/index.html:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>MSG to XML Converter</title>
    <style>
        body { font-family: Arial, sans-serif; background: #f4f6f9; display: flex; justify-content: center; align-items: center; height: 100vh; }
        .container { background: #fff; padding: 30px; border-radius: 12px; box-shadow: 0 4px 8px rgba(0,0,0,0.1); text-align: center; width: 400px; }
        h1 { color: #333; margin-bottom: 20px; }
        input[type="file"] { display: block; margin: 20px auto; }
        button { background: #007BFF; color: white; border: none; padding: 10px 20px; border-radius: 8px; cursor: pointer; font-size: 16px; }
        button:hover { background: #0056b3; }
    </style>
</head>
<body>
    <div class="container">
        <h1>MSG to XML Converter</h1>
        <form method="POST" enctype="multipart/form-data">
            <input type="file" name="file" accept=".msg" required>
            <button type="submit">Convert & Download XML</button>
        </form>
    </div>
</body>
</html>


✅ Step 3: Run the App

python app.py

Open your browser and go to:
👉 http://127.0.0.1:5000
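By default, Flask's development server listens only on 127.0.0.1. To reach the app from another machine during testing, you could bind to all interfaces (a quick sketch; not intended for production use):

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=False)   # listen on all interfaces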

✅ Output Example

The generated XML file will look like this:


<?xml version="1.0" encoding="utf-8"?>
<Email>
    <Subject>Test Email</Subject>
    <Sender>sender@example.com</Sender>
    <Date>2025-07-23 12:30:00</Date>
    <Body>Hello, this is a test email.</Body>
    <Recipients>
        <Recipient>John Doe &lt;john@example.com&gt;</Recipient>
    </Recipients>
    <Attachments>
        <Attachment>
            <Filename>document.pdf</Filename>
            <Size>12345</Size>
        </Attachment>
    </Attachments>
</Email>


✅ Next Enhancements

You can add more features:

  • Include attachments in Base64 inside the XML (see the sketch after this list)

  • HTML preview before downloading

  • Support multiple file upload and batch conversion

  • Add a progress bar with drag & drop upload
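For the Base64 idea, only the attachments loop in convert_msg_to_xml needs to change; here is a minimal sketch against the code above (the Content element name is an assumption):

import base64

# inside convert_msg_to_xml, replacing the attachments loop:
attachments = ET.SubElement(root, "Attachments")
for att in msg.attachments:
    att_elem = ET.SubElement(attachments, "Attachment")
    ET.SubElement(att_elem, "Filename").text = att.longFilename or att.shortFilename or ''
    ET.SubElement(att_elem, "Size").text = str(len(att.data)) if att.data else '0'
    if att.data:
        # Base64-encode the raw bytes so binary content can live safely inside the XML
        ET.SubElement(att_elem, "Content").text = base64.b64encode(att.data).decode('ascii')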

✔ Final Thoughts

This lightweight Flask app is a simple and efficient way to transform Outlook MSG files into XML, making email data more accessible for integration and analysis.

Tuesday, July 22, 2025

Connect Microsoft Graph API with Copilot Studio for email processing

 This article from Microsoft Q&A outlines how to use Microsoft Graph API to integrate Outlook email and calendar data into Copilot Studio agents. Key highlights include:

✅ Data Integration

  • Use Microsoft Graph API to access Outlook data.
  • Key endpoints:
    • /me/messages – Access emails.
    • /me/events – Access calendar events.
    • /me/mailFolders – Manage folders.

🛠️ Tools & APIs

  • Microsoft Graph API – Primary API for accessing Microsoft 365 data.
  • Microsoft Graph Connectors – Bring external data into Microsoft 365 and Copilot Studio.
  • Power Platform Connectors – Use with Power Automate to bridge email data to Copilot Studio.

💡 Use Cases

  • Email Summarization – Copilot extracts key insights from emails.
  • Meeting Preparation – Generate agendas from calendar and email threads.
  • Task Management – Create tasks based on email content.

🧩 How to Call Microsoft Graph API from Copilot Agent

This guide explains how to:

  • Use Power Automate to trigger Graph API calls.
  • Set up OAuth 2.0 authentication using Azure AD.
  • Send email data to Copilot Studio agents via HTTP actions (a minimal request sketch follows this list).
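As an illustration, once an OAuth 2.0 access token has been obtained from Azure AD (token acquisition is not shown here), the /me/messages endpoint can be called from a Power Automate HTTP action or a small script; a minimal Python sketch, assuming a valid token:

import requests

ACCESS_TOKEN = "<access token obtained via Azure AD / OAuth 2.0>"  # placeholder

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages?$top=5&$select=subject,from,receivedDateTime",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()

for message in resp.json().get("value", []):
    # each item is a JSON message object with the fields selected above
    print(message["receivedDateTime"], message["subject"])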


https://learn.microsoft.com/en-us/answers/questions/2156030/integrating-outlook-as-a-knowledge-base-for-copilo

Wednesday, July 16, 2025

🤖 📄 How to Create a Document-Insight Agent in Microsoft 365 Copilot

📚 Microsoft 365 Copilot enables you to build custom agents that interact with users, retrieve knowledge from organizational data, and answer context-based questions. This guide covers how to create an agent that reads specified documents and provides accurate answers.


1. Overview

The agent will:

  • Read and process uploaded or linked documents.

  • Answer user queries strictly based on document content.

  • Support knowledge integration from SharePoint, OneDrive, and Teams.


2. Steps to Create an Agent in Microsoft 365 Copilot

Step 1: Open Microsoft 365 Copilot

  • Launch the Microsoft 365 Copilot app.


Step 2: Access Agents

  • Navigate to:
    Chat → Agents

  • Click Create Agent or go to All Agents to explore templates.


Step 3: Choose Creation Method

  • From Scratch:
    Use Agent Builder with natural language instructions.

  • Template:
    Select a pre-built option from the Agent Store.


Step 4: Configure the Agent

  • Description:
    Define the purpose of the agent.
    Example:
    "This agent reads and indexes documents from connected knowledge sources and provides context-based answers."

  • Add Instructions:
    Use this prompt for agent behaviour:

Agent Instruction Prompt

You are an intelligent Copilot agent designed to assist users by reading and understanding the content of specified documents and answering questions based on those documents.
Your objectives: 1. Read and process the provided documents thoroughly. 2. Break down documents into manageable sections for better retrieval. 3. When a user asks a question, search the documents to find the most relevant information. 4. Provide accurate, concise, and context-aware answers strictly based on the documents. 5. If the answer is not in the documents, respond with: "I could not find this information in the provided documents." 6. Never fabricate or use external knowledge. 7. Maintain a clear and professional tone. 8. Summarize for complex queries unless full details are requested. Capabilities: - Extract and parse text from PDFs, Word, and text documents. - Index and retrieve answers efficiently from multiple sources.

  • Connect Knowledge Sources:
    Examples:

    • SharePoint libraries (Contracts, Policies)

    • OneDrive folders

    • Teams document repositories


Step 5: Test and Deploy 🛠️

  • Test the agent in Preview Mode.

  • Verify:

    • It uses only allowed sources.

    • It returns correct and concise answers.

  • Publish and share with your organization.


3. Visual Flow Diagram

The creation process flows as: Open Microsoft 365 Copilot → Chat → Agents → Create Agent → Configure (description, instructions, knowledge sources) → Test in Preview Mode → Publish.


4. Example Copilot Configuration

Description:
"Document Search Agent – Reads and indexes organizational documents to provide accurate answers based on their content."

Instructions:

You are a Copilot agent that answers questions only using provided documents.
If the answer is not present, say: "The information is not available in the provided documents."
Always provide concise responses and mention the source document name where possible.
Do not use external information.

Knowledge Sources:

  • SharePoint Site: Company Contracts

  • OneDrive Folder: HR Policies

  • Teams Channel Files: Project Documentation


Best Practices

  • Write explicit guardrails to prevent hallucination.

  • Ensure data access is restricted to approved sources.

  • Monitor agent responses and refine as needed.


Reference: Copilot Developer Camp


Monday, July 7, 2025

🚀 Deploying a Python Flask Application on Windows Server 2022 Datacenter

 


Deploying a Python Flask application on a Windows Server 2022 Datacenter environment can be a powerful way to host scalable web services within enterprise infrastructure. While Flask is lightweight and flexible, Windows Server offers robust security, centralized management, and compatibility with enterprise tools. However, successful deployment requires careful planning and execution.

Why Flask on Windows Server?

Flask is a micro web framework for Python that allows developers to build web applications quickly and with minimal overhead. Windows Server 2022 Datacenter, on the other hand, is designed for high-performance workloads, virtualization, and hybrid cloud integration. Combining the two can be ideal for internal tools, dashboards, APIs, or even customer-facing applications.


Key Deployment Considerations

Before deploying, it's essential to understand the components involved:

  • Python Environment: Ensure Python (e.g., version 3.11) is installed and configured properly.
  • Web Server: Choose between IIS (Internet Information Services) with FastCGI or a standalone WSGI server like Waitress (see the sketch after this list).
  • Database: Configure and secure access to the database (e.g., PostgreSQL, MySQL, or SQLite).
  • Security: Apply SSL certificates, configure firewalls, and set up user permissions.
  • Monitoring & Logging: Implement logging mechanisms and performance monitoring tools.
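For the Waitress option, serving the app takes only a few lines of Python; a minimal sketch, assuming your Flask instance is named app in app.py:

# serve.py - run the Flask app with Waitress on Windows
from waitress import serve
from app import app   # assumption: Flask app object named 'app' in app.py

serve(app, host='0.0.0.0', port=8080)   # listen on all interfaces, port 8080

You can then put IIS (or another reverse proxy) in front of Waitress to handle SSL and external traffic.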

Deployment Checklist

A deployment checklist helps streamline the process and avoid common pitfalls. Here's a breakdown:

✅ Pre-Deployment

  • Verify server access and administrative privileges.
  • Install Python and set up a virtual environment.
  • Open necessary ports (e.g., 80 for HTTP, 443 for HTTPS).
  • Install required Python packages using requirements.txt.
  • Prepare the database and test connectivity.
  • Ensure SSL certificates are available if HTTPS is needed.

🚀 Deployment

  • Upload application files to the server.
  • Activate the virtual environment and install dependencies.
  • Configure environment variables and secrets.
  • Set up the web server to serve the Flask app.
  • Apply SSL and configure reverse proxy if needed.
  • Test endpoints and application behavior.

🔄 Post-Deployment

  • Monitor logs for errors and performance issues.
  • Validate database operations and backups.
  • Set up alerts and uptime monitoring.
  • Document the deployment process for future reference.
  • Notify stakeholders of successful deployment.

Conclusion

Deploying a Flask application on Windows Server 2022 Datacenter is a strategic choice for organizations seeking performance, control, and integration with existing infrastructure. By following a structured deployment checklist, teams can ensure a smooth rollout, minimize downtime, and maintain a secure and scalable environment.

Friday, July 4, 2025

🧠 Steps to Build a POC Locally with Windows AI Foundry

🚀 Introduction

What is Foundry Local?

Foundry Local brings the power and trust of Azure AI Foundry to your device. It includes everything you need to run AI apps locally.

As AI adoption grows, developers are increasingly looking for ways to build intelligent applications that run efficiently on local machines. Windows AI Foundry, especially its Foundry Local feature, enables developers to create AI-powered proofs of concept (POCs) without relying on cloud infrastructure or Azure subscriptions. This article walks you through the step-by-step process of building a POC locally using Windows AI Foundry and addresses common challenges along the way.


🧰 Prerequisites

Before you begin, ensure you have:

  • A Windows 11 machine with sufficient CPU/GPU/NPU resources.
  • Internet access (for initial setup and model downloads).
  • Familiarity with command-line tools and programming (Python, C#, or JavaScript).
  • Installed tools:
    • Foundry CLI
    • Foundry Local SDK
    • ONNX Runtime (optional)

🛠️ Step-by-Step Guide

Step 1: Install Foundry CLI and SDK

Install the Foundry CLI:

winget install FoundryCLI

Install the SDK for your preferred language (e.g., Python):

pip install foundry-sdk

Step 2: Choose and Download a Model

Foundry Local supports several optimized models:

  • Phi-4 Reasoning
  • Mistral
  • Qwen 2.5 Instruct
  • DeepSeek R1

Download a model:

foundry model download phi-4

Step 3: Run Inference Locally

Run inference directly from the CLI:

foundry model run phi-4 --input "Explain quantum computing in simple terms."

Step 4: Build Your Application

Example using Python:

from foundry import FoundryModel

model = FoundryModel("phi-4")
response = model.run("What is the capital of Karnataka?")
print(response)

Step 5: Test and Iterate

Test your application with different inputs. Monitor performance and refine prompts or model selection as needed.
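A quick way to iterate is to loop over a small prompt set and compare the outputs side by side, reusing the FoundryModel example above (API as shown in this post):

test_prompts = [
    "Explain quantum computing in simple terms.",
    "What is the capital of Karnataka?",
]
for prompt in test_prompts:
    # print each prompt alongside the model's answer for easy comparison
    print(prompt, "->", model.run(prompt))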


⚠️ Common Challenges and Solutions

If winget is blocked by organization policies, try this:

πŸ” Manual Installation

  1. Visit the official Foundry Local page:
    Windows AI Foundry Dev Blog
  2. Download the Foundry Local Installer (MSI or EXE).
  3. Run the installer as Administrator.
  4. Follow the prompts to complete installation.

🧪 Verify Installation

Open Command Prompt and run:

foundry --version

🤖 List Available Models

foundry model list

This will show models like:

  • phi-3.5-mini
  • phi-4
  • mistral-7b
  • qwen-2.5
  • deepseek-r1

Common challenges, what they mean, and how to address them:

  • Hardware Limitations: Some models require significant memory or GPU/NPU support. Solution: Use lightweight models like Phi-4 or quantized ONNX versions.
  • Model Compatibility: Not all models are optimized for local inference. Solution: Stick to models officially supported by Foundry Local or convert models to ONNX format.
  • Latency Issues: Inference may be slow on older machines. Solution: Use smaller models or optimize with ONNX Runtime and hardware acceleration.
  • Limited Documentation: Foundry Local is relatively new, so community support is still growing. Solution: Refer to the official Foundry blog and GitHub issues for guidance.
  • Integration Complexity: Integrating AI into existing apps can be tricky. Solution: Use the SDKs and sample code provided by Microsoft to speed up development.

📈 Use Cases for Local POCs

  • Customer Support Bots
  • Offline Educational Tools
  • Secure Enterprise Assistants
  • Healthcare Decision Support (with local data)


To run Windows AI Foundry (Foundry Local) effectively on your machine, here are the minimum and recommended hardware requirements for CPU, GPU, and NPU:

✅ Minimum Requirements

These are sufficient for basic model inference (e.g., small models like Phi-3.5-mini):

  • Operating System: Windows 10 (x64), Windows 11 (x64/ARM), or Windows Server 2025
  • CPU: Any modern x64 processor (Intel i5/Ryzen 5 or better)
  • RAM: 8 GB
  • Disk Space: 3 GB free
  • Acceleration (Optional): None required; CPU-only inference is supported

🌟 Recommended Requirements

For smoother performance and support for larger models:

  • CPU: Intel i7/Ryzen 7 or better
  • RAM: 16 GB or more
  • Disk Space: 15–20 GB free (for model caching)
  • GPU:
    • NVIDIA: RTX 2000 series or newer
    • AMD: Radeon 6000 series or newer
  • NPU:
    • Qualcomm Snapdragon X Elite (with 8 GB or more VRAM)
    • Apple Silicon (for macOS users)

Foundry Local automatically detects your hardware and downloads the most optimized model variant (CPU, GPU, or NPU) accordingly.

🧩 Conclusion

Windows AI Foundry's local capabilities make it easier than ever to build powerful, privacy-preserving AI applications without cloud dependencies. By understanding the setup process and proactively addressing common challenges, developers can rapidly prototype and deploy intelligent solutions on Windows devices.

https://github.com/microsoft/Foundry-Local/blob/main/docs/README.md

https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-local/get-started

https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-local/reference/reference-cli


Link: Get started with Azure AI Foundry
https://learn.microsoft.com/en-us/azure/ai-foundry/quickstarts/get-started-code?tabs=azure-ai-foundry&pivots=fdp-project

🧠 Sample Use Cases for Foundry Local + Phi-4

1. Offline Customer Support Assistant

  • Scenario: A local chatbot that helps employees or customers with FAQs.
  • Why Local?: No internet dependency; ideal for secure environments.
  • Example Prompt: "How do I reset my company email password?"

2. Internal Knowledge Search Tool

  • Scenario: Search and summarize internal documents or policies.
  • Why Local?: Keeps sensitive data on-device.
  • Example Prompt: "Summarize the leave policy from this PDF."

3. Educational Tutor App

  • Scenario: A desktop app that helps students learn topics interactively.
  • Why Local?: Works in low-connectivity areas like rural schools.
  • Example Prompt: "Explain Newton's laws with examples."

4. Healthcare Assistant (Private Clinics)

  • Scenario: Helps doctors or staff with medical terminology or patient instructions.
  • Why Local?: Ensures patient data privacy.
  • Example Prompt: "What are the symptoms of dengue?"

5. Coding Helper for Developers

  • Scenario: Local assistant that helps write or debug code.
  • Why Local?: No need to send code snippets to the cloud.
  • Example Prompt: "Write a Python function to sort a list of dictionaries by age."

6. Legal Document Analyzer

  • Scenario: Summarizes or explains legal clauses from contracts.
  • Why Local?: Keeps sensitive legal data secure.
  • Example Prompt: "Summarize clause 4.2 of this agreement."

7. Multilingual Translator

  • Scenario: Translate local language documents or messages.
  • Why Local?: Works offline and avoids sending data to external servers.
  • Example Prompt: "Translate this Kannada sentence to English."
