Graph API Week Day 7 - Auditing Microsoft 365 Like a Hacker (Using Graph, Legally)

  • Admin Content
  • Dec 04, 2025

Auditing Microsoft 365 Like a Hacker (Using Graph, Legally)

Short version: think like an attacker to predict their moves, then use Microsoft Graph (and Microsoft Purview / Security APIs) to discover, investigate, and respond — all with least-privilege, auditable, and legal tooling. This article walks you through the mindset, the Graph endpoints that matter, safe architecture & permissions, hunting patterns, automation ideas, and a compact operational checklist with sample read-only queries you can run from Graph Explorer or the Microsoft Graph PowerShell/SDK. Ready? Let’s go.


Adopt the attacker’s mindset — legally and defensively

To hunt effectively you must think like someone trying to bypass controls, not like someone doing a compliance checkbox exercise. That means enumerating what an adversary would try first: enumerate accounts and last-used devices, check for privilege escalation events, review risky sign-ins, and find service principals or apps with excessive permissions. Framing your investigation this way helps you prioritize which logs and alerts to collect, and which anomalies are likely early indicators of compromise.

Adopting this mindset does not mean performing offensive actions against production tenants or external targets. Always run your hunts against tenants where you have explicit authorization, document authority, and follow organizational change-control and privacy rules. Keep a defensible audit trail of your own actions: who ran what query, when, and why. That trail is essential in internal reviews and — if needed — in third-party audits or incident postmortems.

Translate the attacker steps into data-oriented questions: “Which users added a global admin in the last 30 days?” becomes a directory-audit query. “Which client IPs had many failed sign-ins then a successful one?” becomes a sign-in timeline. Turning motivations into queries keeps your work measurable and repeatable. Finally, agree on acceptable risk and escalation thresholds with stakeholders before you run any aggressive discovery—this prevents false-positive panic and preserves trust.
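As a concrete illustration of turning an attacker question into a directory-audit query, here is a minimal stdlib-only sketch that builds the read-only Graph URL for "who was added to a directory role in the last 30 days". The activity name 'Add member to role' is the audit event Azure AD emits for role-membership additions; the helper name `role_change_query` is mine, not Microsoft's.

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import quote

def role_change_query(days_back: int = 30) -> str:
    """Build a read-only directoryAudits URL for role-membership additions.

    'Add member to role' is the activityDisplayName logged when a directory
    role (including Global Administrator) gains a member.
    """
    since = (datetime.now(timezone.utc) - timedelta(days=days_back)
             ).strftime("%Y-%m-%dT%H:%M:%SZ")
    flt = f"activityDisplayName eq 'Add member to role' and activityDateTime ge {since}"
    return ("https://graph.microsoft.com/v1.0/auditLogs/directoryAudits"
            f"?$filter={quote(flt)}&$orderby=activityDateTime desc")
```

Paste the resulting URL into Graph Explorer or pass it to any authenticated GET; because the query itself is version-controlled text, the hunt is repeatable and reviewable.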

Documenting scope and approvals also protects you legally. If you’re a consultant, have a signed scope-of-work and a clear non-disclosure/audit clause; if you’re internal, get written approval from the CISO or delegated authority before running tenant-wide automated hunts. This way you stay on the right side of law and policy while still behaving like a hacker when you analyze the environment.


The Graph endpoints that actually matter (and what they reveal)

Three Microsoft Graph areas will be your daily bread: directory audit logs, sign-in logs, and the Security / Alerts surfaces. Directory audit logs show who changed directory objects (role assignments, app registrations, group membership, PIM activity) and are essential for detecting privilege changes. The directoryAudits endpoint lets you list and filter these events programmatically so you can detect, for example, new privileged group members or delegated consent grants.

Sign-in logs contain authentication activity — successful, failed, conditional access evaluation, and device/auth method details. These logs are where you spot brute-force, credential stuffing, impossible travel, or new device enrollment patterns. The signIns API surfaces entries including applied conditional access results and risk details, which are crucial when you trace a suspicious session.

The Microsoft Graph Security API gives you a normalized view of alerts, incidents, and security provider data (both Microsoft and partner sources). Use it to fetch correlated alerts, pull alert evidence, and automate enrichment (for example: resolve which user and IP are related to an alert and then kick off a playbook).

There’s also the broader Audit Log / Purview Audit functionality — an audit search API surface that captures a wider set of Microsoft 365 service activity (Exchange, SharePoint, Teams, etc.). For cross-product forensic hunts (for example, “who shared a doc externally after a suspicious login?”), the Purview audit search or audit log query endpoints are the right tools and often the only source that contains service-specific events.


Safe architecture: permissions, app types, and auditing readiness

Principle of least privilege is non-negotiable. Design your audit tooling to request precisely the permissions it needs (for example AuditLog.Read.All or service-scoped read permissions) and nothing more. Where possible, prefer scoped delegated permissions with admin consent only for the roles required — avoid granting tenant-wide “read all” unless absolutely necessary.

Choose the app model carefully. App-only (client credentials) is ideal for automated, non-interactive collection (a SIEM or scheduled collector). App-only requires admin consent and must be treated like a service account — protect its credentials or certificate and rotate them. Delegated access (on-behalf-of a user) is useful for interactive tooling or when you want actions tied to a human identity. Always log which app or user ran the query and why, and keep these logs immutable where possible.
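For the app-only (client credentials) model, the token request is a plain OAuth2 POST against the Microsoft identity platform. Below is a hedged, stdlib-only sketch; the tenant, client ID, and secret are placeholders, and in real tooling the secret (or better, a certificate) would come from a vault and be rotated, never hard-coded.

```python
import json
import urllib.parse
import urllib.request

def build_token_request(tenant: str, client_id: str, secret: str) -> urllib.request.Request:
    """Standard OAuth2 client-credentials request for an app-only Graph token.

    The ".default" scope asks for whatever application permissions were
    granted via admin consent — which is why least privilege at consent
    time matters so much for this app model.
    """
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": secret,
        "scope": "https://graph.microsoft.com/.default",
    }).encode()
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    return urllib.request.Request(url, data=body, method="POST")

def fetch_token(req: urllib.request.Request) -> str:
    """Performs the actual network call; run only against an authorized tenant."""
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

In production you would typically use MSAL or the Graph SDK instead of raw HTTP, but the raw request makes the moving parts (and the credential you must protect) explicit.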

Enable and verify auditing across services before you hunt. Some tenants require explicitly turning the audit log on (Purview auditing) or setting up diagnostic settings to send Graph activity to Log Analytics / Storage / Event Hubs. Confirm retention policies and export destinations — if logs are expired or not being sent to your SIEM, your hunt will be blind by design.

Make consent and admin controls part of your governance: configure user consent policies, use app permission review, and schedule regular app permission audits so you don’t have stale service principals with excessive rights. Combine these governance steps with automated reporting: weekly checks for new app registrations, monthly listings of apps with high-level privileges, and quarterly third-party app permission reviews.


Hunting patterns, queries, and safe examples you can run today

Translate attacker behaviors into reusable queries. Examples of high-value hunts include: new privileged group additions in the last 24 hours; service principals created recently with an owner outside the expected admin teams; sign-ins from anonymized IP ranges that later access sensitive mailboxes; and conditional access policy changes. Below are safe, read-only sample Graph queries (suitable for Graph Explorer or a script).

Sample queries:

1. Get the latest directory audit events (who changed roles):

GET https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$top=50&$orderby=activityDateTime desc

This returns recent directory changes so you can triage privilege modifications.

2. Pull recent sign-ins with conditional access details:

GET https://graph.microsoft.com/v1.0/auditLogs/signIns?$filter=createdDateTime ge 2025-09-01T00:00:00Z&$top=100

Inspect the conditionalAccessStatus, riskDetail, and ipAddress fields for patterns.

3. List security alerts for automated triage:

GET https://graph.microsoft.com/v1.0/security/alerts?$top=50

Use the returned alerts to enrich incidents in your SOAR.
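Running these queries from a script needs nothing beyond a bearer token and an authenticated GET. A minimal stdlib sketch, assuming you have already acquired a token through your chosen app model (the helper names here are mine):

```python
import json
import urllib.request

def auth_headers(token: str) -> dict:
    # A bearer token is all a read-only Graph GET needs.
    return {"Authorization": f"Bearer {token}"}

def graph_get(url: str, token: str) -> dict:
    """Run one read-only Graph query and return the parsed JSON payload.

    This performs a network call: run it only against a tenant you are
    explicitly authorized to audit, and log who ran it and why.
    """
    req = urllib.request.Request(url, headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example usage (not executed here): triage recent directory audits.
# audits = graph_get("https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$top=50", token)
# for event in audits.get("value", []):
#     print(event["activityDateTime"], event["activityDisplayName"])
```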

When building hunts, favor observable enrichment over immediate remediation. Enrich an IP address with geo, ASN, and known-bad lists; group user behavior across a 30-day baseline; and compute changes (for example, sudden inbox forwarding or new mailbox rules after a risky sign-in). These enrichments turn raw log noise into prioritized incidents.
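The "many failures then a success" pattern mentioned earlier is easy to express once sign-in records are in hand. A sketch over simplified records (in the real signIns payload, success corresponds to status.errorCode == 0; the flat "status" field and threshold below are my simplification):

```python
from collections import defaultdict

def flag_spray_then_success(signins: list, fail_threshold: int = 5) -> set:
    """Flag IPs whose run of failed sign-ins is immediately followed by a success.

    `signins` is a chronologically ordered list of simplified records:
    {"ipAddress": ..., "status": "success" | "failure"}.
    """
    consecutive_fails = defaultdict(int)
    flagged = set()
    for rec in signins:
        ip = rec["ipAddress"]
        if rec["status"] == "failure":
            consecutive_fails[ip] += 1
        else:
            if consecutive_fails[ip] >= fail_threshold:
                flagged.add(ip)  # brute-force / spray that eventually landed
            consecutive_fails[ip] = 0
    return flagged
```

In a real hunt you would feed this the enriched sign-in timeline and tune the threshold against your 30-day baseline rather than a fixed constant.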

Make investigations reproducible: encapsulate queries into scripts or notebooks, parameterize the time windows, and version-control your hunting playbooks. When you find a signal, save the exact query that produced it and record why it mattered — that makes after-action reviews and mitigation measurable.
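Parameterizing the time window can be as small as one version-controlled helper. A sketch, assuming a signIns-style resource that filters on createdDateTime (directoryAudits filters on activityDateTime instead; the function name is mine):

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

GRAPH = "https://graph.microsoft.com/v1.0"

def build_hunt_url(resource: str, days_back: int, top: int = 100) -> str:
    """Same query shape every run; only the time window changes.

    Committing this helper (and the resources it is run against) to version
    control is what makes a hunt reproducible in an after-action review.
    """
    since = (datetime.now(timezone.utc) - timedelta(days=days_back)
             ).strftime("%Y-%m-%dT%H:%M:%SZ")
    params = urlencode({"$filter": f"createdDateTime ge {since}", "$top": top})
    return f"{GRAPH}/{resource}?{params}"
```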


Automation, alerting, and response — from Graph to action

Graph is great for collection and investigation, but you want outcomes: alerts, containment, and remediation. Integrate Graph outputs into your SIEM (native connectors or custom forwarders), then use correlation rules or models to generate prioritized alerts. From there, use runbooks (Azure Logic Apps, Microsoft Sentinel playbooks, or your SOAR) to automate routine response steps — for example, disable a user, block an IP, revoke refresh tokens, or force a password reset — but only after human verification for high-impact actions.

Use the Security API to pull enriched alerts and evidence into your automation engine, and make sure your playbooks include safety checks and approval gates for destructive actions. The Security API supports getting alert details and related entities so your playbooks can make context-aware decisions instead of acting blindly.
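An approval gate can be modeled as a small, auditable decision function rather than logic buried inside a playbook. A hedged sketch (the action names and severity values are illustrative, not from any Microsoft schema):

```python
from dataclasses import dataclass

# Actions that change tenant state and therefore always need a human approver.
DESTRUCTIVE_ACTIONS = {"disable_user", "revoke_sessions", "delete_inbox_rule"}

@dataclass
class PlaybookDecision:
    action: str
    requires_approval: bool

def triage(alert_severity: str, proposed_action: str) -> PlaybookDecision:
    """Safety gate: destructive or high-severity responses wait for a human;
    enrichment-only actions may run automatically."""
    needs_human = (proposed_action in DESTRUCTIVE_ACTIONS
                   or alert_severity == "high")
    return PlaybookDecision(proposed_action, needs_human)
```

Keeping the gate in one function means the approval policy itself is version-controlled and reviewable, just like the hunts that feed it.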

For containment, prefer reversible actions that preserve evidence: disable sign-in, remove device access, or suspend sessions before deleting objects. Always capture forensic snapshots (full audit events, mailbox actions, SharePoint access logs) and place them into immutable storage for later analysis and potential legal needs.

Close the loop with post-incident reviews: document detection timelines, which Graph queries found the activity, time-to-detect, and what was automated. Build new monitors from lessons learned and push runbook improvements back into your automation pipeline.


Practical operational checklist (compact and pasteable)

 

  1. Permissions & Governance: Confirm your collector app has the minimum permissions required (for example AuditLog.Read.All) and that admin consent is recorded. Review app registrations monthly and remove unused or over-privileged apps.
  2. Audit Readiness: Verify Purview auditing is enabled and diagnostic settings forward logs to durable storage (Log Analytics, Storage, Event Hubs). Confirm retention meets legal and investigative needs.
  3. Baseline & Alerts: Establish baselines for normal sign-in and admin-change rates. Create alerts for deviations such as sudden admin role changes or new app owners outside expected groups.
  4. Hunting Playbooks: Implement scheduled hunts for high-value items: privilege additions, new service principals, anomalous MFA bypass attempts, high-risk sign-ins, and external sharing after high-risk sign-ins. Store the exact queries you run and version-control them.
  5. Automation & SOAR: Integrate Graph Security outputs into your SOAR. Keep a human in the loop for high-risk remediation and ensure playbooks capture full audit context before taking irreversible actions.
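The baseline-deviation check in item 3 reduces to simple statistics. A sketch, assuming a daily count of admin changes and a 3-sigma threshold (both choices are mine; tune them to your tenant):

```python
from statistics import mean, stdev

def is_anomalous(baseline_counts: list, today: int, sigmas: float = 3.0) -> bool:
    """Flag today's admin-change count if it exceeds mean + N*stdev of the baseline.

    baseline_counts: daily counts over your baseline window (e.g. 30 days).
    """
    mu = mean(baseline_counts)
    sd = stdev(baseline_counts) if len(baseline_counts) > 1 else 0.0
    return today > mu + sigmas * sd
```

Fire an alert only when this returns True, and include the baseline window and threshold in the alert body so the on-call analyst can see why it fired.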

 


Final notes — ethics, limits, and next steps

“Like a hacker” here is a heuristic — it means adopt an adversary’s model to surface weak points and misconfigurations, not to attack tenants. Your work must be authorized, auditable, and reversible. Respect data privacy rules (PII handling, data minimization) while hunting, and avoid actions that could disrupt production or users without prior sign-off.
