Unanswered Questions
1) Any advice on using AI for research, specifically for finding ‘needle in a haystack’ information across both proprietary and public sources?
For ‘needle in a haystack’ research, the key is grounding the AI in trusted data first - your secured, governed company information - so it starts from facts you already trust. You can let it look at public sources or specific websites, but those results still need to be validated. Responsible AI isn’t just about preventing hallucinations; it’s about checking the answers, understanding the sources, and staying in control. Modern AI uses vector indexing to search by meaning, not just keywords, which is why it can surface insights traditional search would miss - but it still works best as a smart research assistant, not the final decision-maker. From our AI webinar, this comes back to the slide I presented on your most valuable asset and why it needs to be secured: your data.
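If you’re curious what ‘searching by meaning’ looks like under the hood, here’s a minimal, hypothetical sketch using the open-source sentence-transformers library; the model name and the sample documents are illustrative assumptions, not a recommendation of any specific stack.

```python
# A minimal, hypothetical sketch of meaning-based ("vector") search.
# Assumes the open-source sentence-transformers package; the model name
# and the sample documents are illustrative, not part of any real system.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

# Stand-ins for your governed company documents.
documents = [
    "Q3 facilities report: roof repairs deferred to next fiscal year.",
    "Vendor contract renewal terms for the Oakland office lease.",
    "Incident summary: HVAC failure in the server room, June 12.",
]
query = "Which building maintenance work was postponed?"

# Embed the documents and the query as vectors, then rank by cosine similarity.
doc_vectors = model.encode(documents, convert_to_tensor=True)
query_vector = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vector, doc_vectors)[0]

# "postponed" matches "deferred" even though the two share no keywords.
for doc, score in sorted(zip(documents, scores.tolist()), key=lambda p: p[1], reverse=True):
    print(f"{score:.2f}  {doc}")
```

The takeaway: ‘postponed’ still finds ‘deferred’ even though no keywords overlap, which is exactly why vector search surfaces things traditional search misses - and why a human still needs to validate what it returns.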
2) Can you address the environmental impacts of AI and ways we can be mindful of that when utilizing this tool?
It’s still early to make definitive claims about the environmental impact of AI, but like any powerful digital tool, it does require energy. AI runs in data centers, and those facilities are electricity- and water-intensive—similar to cloud computing, streaming services, or large enterprise systems.
From personal experience, I live in Pennsylvania, where there’s been discussion about placing new data centers nearby. Some communities raise concerns around higher electricity costs over time and potential impacts on groundwater, especially in areas that rely on private wells. Those are real considerations, but they’re also part of broader infrastructure planning conversations - not unique to AI.
The important thing is that responsible AI isn’t just about how models behave, but also about how they’re deployed: with efficient infrastructure, thoughtful siting, and governance that balances innovation with sustainability. Like most technologies, the impact depends less on the tool itself and more on how intentionally we use it.
3) Do you have AI-specific recommendations/examples for an architecture firm?
For architecture firms, there usually isn’t a single ‘architecture-specific AI model,’ and that’s actually okay. I did a quick search of Azure AI Foundry and found nothing generally available, so I pivoted to how others in the same business are using AI. The real value comes from how you use AI, not the model itself. Most firms see the best results when AI acts as a helper layered on top of their existing systems and data.
Common use cases include using AI as a project assistant to track milestones and summarize meetings, a document librarian that can quickly find drawings, specs, or RFIs, and even a zoning or code research assistant grounded in local public regulations. We also see firms using AI to look across past projects - identifying missing or weak documentation in prior bids, spotting patterns in wins and losses, and reusing proven templates and approaches.
The current best practice is to connect a general-purpose AI model to the firm’s secured, governed data - CAD files, project documents, schedules, and reference materials - rather than trying to ‘train’ a brand-new model, which raises questions about the AI performing the actual architecture work and the legal implications that come with it. That way, AI understands YOUR firm’s work and language while staying within legal, security, and compliance boundaries. Think of it less as replacing expertise and more as giving every architect a very fast, very organized assistant.
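To make the ‘connect, don’t train’ idea concrete, here’s a minimal, hypothetical sketch of the grounding pattern. Everything in it is a placeholder standing in for the firm’s real governed search index and approved model endpoint.

```python
# A minimal, hypothetical sketch of "grounding" a general-purpose model in
# the firm's own data instead of training a new one. retrieve_firm_documents()
# and call_model() are placeholders for whatever governed search index and
# approved model endpoint the firm actually uses (e.g., Azure AI Search plus
# Azure OpenAI); the document snippets are invented for illustration.

def retrieve_firm_documents(question: str) -> list[str]:
    # Stand-in for a permission-aware search over indexed specs, RFIs,
    # schedules, and drawing metadata.
    return [
        "RFI-214: stairwell assembly fire-rating requirement clarified as 2-hour.",
        "Spec section 08 71 00: door hardware schedule, revision C, issued 2024-03-18.",
    ]

def call_model(prompt: str) -> str:
    # Stand-in for the firm's approved large language model endpoint.
    return "[model answer, constrained to the supplied documents, goes here]"

def ask_grounded(question: str) -> str:
    # Build a prompt that keeps the model inside the firm's own documents.
    context = "\n".join(f"- {s}" for s in retrieve_firm_documents(question))
    prompt = (
        "Answer using ONLY the firm documents below. "
        "If the answer is not there, say you don't know.\n\n"
        f"Firm documents:\n{context}\n\nQuestion: {question}"
    )
    return call_model(prompt)

print(ask_grounded("What fire rating applies to the stairwell assembly?"))
```

The design choice that matters is in the prompt: the model is told to answer only from the firm’s own documents, which is what keeps it speaking your firm’s language while staying inside your legal, security, and compliance boundaries.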
Overview
December 11th
AI is entering a critical moment where organizations are demanding proof of real operational value. This session gives you a clear look into today’s AI hype cycle, common implementation failures, and real automation examples. You’ll get a grounded look at what it actually takes to make AI work in a business environment.
We’ll show how organizations move from early adoption to meaningful automation by focusing on documented workflows, measurable outcomes, and the right sequence of steps. Rather than jumping straight into advanced use cases, you’ll see how a disciplined Crawl → Walk → Run approach creates faster wins, safer deployments, and long-term scalability. The goal is simple: help you understand what “good AI implementation” looks like so you can accelerate it inside your own organization.
Agenda:
- The Tractor Story: An analogy that reframes today’s AI moment and why major technology shifts always begin with uncertainty.
- The AI Reality Check: A grounded look at where AI sits on the hype cycle and why skepticism is growing.
- Why Implementations Fail: The operational gaps that stall AI progress, such as undocumented workflows, unclear processes, and weak implementation discipline.
- Real Automation in Action: A case study showing how AI removed significant manual work and delivered measurable ROI.
- Getting Started the Right Way: Practical guidance on how organizations can take their first steps with confidence.
- The Crawl → Walk → Run Framework: A structured model for safe deployment, workflow integration, and scaling into intelligent automation.
- Q&A + Panel Discussion: A live conversation with Endsight’s leadership and engineering team.
Do you have questions?
Fill out the form below and we will have Michelle, or whoever is best suited, answer your questions.
AI Resources
- AI in the Wrong Hands - Cybersecurity Risks & Threats Video: https://www.endsight.net/security-shorts/how-hackers-use-ai
- Scenario Library: https://adoption.microsoft.com/en-us/copilot-scenario-library/
- Copilot Lab: https://copilot.cloud.microsoft/en-US/prompts
- Setup Guide: https://setup.microsoft.com/copilot/setup-guide
- Success Kit: https://adoption.microsoft.com/en-us/copilot/success-kit/
- Register for future Development Webinars: https://get.endsight.net/development-webinar
- Register for Office Hours: https://get.endsight.net/cybersecurity/office/hours
- Register for Monthly Cybersecurity Training: https://get.endsight.net/monthly-cybersecurity-fundamentals-training
- Endsight's Video Library: https://get.endsight.net/endsight-video-libary
- 365 Tip of the Week Newsletter: https://get.endsight.net/microsoft-tip-of-the-week
- Mike's LGC Presentation: https://www.youtube.com/watch?v=avoErz6_z4w
- Previous Webinar: https://www.endsight.net/dev-agents-in-action
Speakers
Michelle Brezenski
Manager, Development @ Endsight
Eugene Motisko
Power Platform Developer @ Endsight
With nearly 30 years of IT experience, Eugene leads Endsight’s Power Platform practice, specializing in Power Apps, multi-tier approval automation workflows, AI-driven solutions, and infrastructure automation. He conducts consultations with Endsight’s MSP clients to pinpoint business pain points, design tailored solutions, capture detailed requirements, and deliver precise cost estimates. Eugene holds multiple Microsoft certifications, including Power Platform Solution Architect Expert (PL-600).
Mike Chaput
Founder & CEO @ Endsight
With over 20 years in the IT Managed Services sector, Mike Chaput co-founded Endsight in 2004. Mike’s industry involvement includes memberships in TruMethods, Venture Tech, and True Profit Group. Mike is also on the CEO advisory council of Kaseya and sits on a private equity board invested in the MSP sector. Mike’s academic credentials include a B.S. in Computer Engineering from Michigan State and an MBA from UC Berkeley's Haas School.
Brian Tirado
Director of AI & Automation @ Endsight
With over 20 years of IT experience, Brian is a seasoned professional who began his career at Occidental Technical Group in 2002 and joined Endsight in 2017 through its acquisition. Since then, he has held various leadership roles, starting as RC Manager, advancing to Director of Pod Operations, and now serving as Director of AI and Automation. Brian has a Bachelor’s in Computer Science from Pacific Union College.