Privacy notice

How the portal handles data.

Last updated: 2026-01-16
This portal is designed to collect only the minimum inputs required to produce an assessment run and its output files.
Data you provide
  • Scenario selection and assessment parameters (e.g., target domain).
  • Contact details when required to generate an assessment.
  • Optional notes you enter (do not include secrets or credentials).
Generated data
  • Run status/progress metadata.
  • Generated report/result files made available for download.
Evidence uploads
When enabled for a scenario, evidence uploads are performed directly from your browser to your designated storage workspace. The portal UI does not store uploaded evidence files in the application backend.
Retention
Retention depends on the deployment configuration and engagement requirements. If you need explicit retention guarantees, request them through your engagement contact.
Contact
For privacy questions or data requests, use the support contact referenced by your engagement.
© 2026 LuiT · Modern Support Consult · Hosted on Microsoft Azure + GitHub Enterprise
Support details (version & routing)
  • API: msc-portal-api-d7g5fudfa7eufufa.westeurope-01.azurewebsites.net
  • Build: 6d394df1@20260501T180220Z
Billing & licensing (public summary)
This portal provides structured assessments and actionable guidance. Results depend on what is observable and what changes are implemented in your environment.
Results depend on inputs
  • Online scenarios measure available signals (for example: public DNS records) and report what is observable at the time of the run.
  • Guidance describes recommended steps; applying them may require configuration changes and permissions, and may depend on vendor behavior and timing.
  • Outcomes are influenced by multiple moving parts; we focus on measurable improvements, defensible choices, and clear evidence you can act on.
How costs are calculated (customer view)
  • Online run pricing: each paid online run is charged individually. The price shown at checkout for the selected scenario is the source of truth.
  • Customization-supported scenarios: scope and pricing are confirmed with you (in writing) before work begins, and only change if scope assumptions materially change.
  • Performance-aligned pricing (larger scenarios): for eligible, explicitly agreed engagements, a portion of pricing can be aligned to measured outcomes. Where used, the percentage applied by MSC/LuiT is fixed (currently 3.69%) and is applied only after the model has been reviewed, understood, and explicitly approved.
  • Billing cadence & crediting: billing can be issued monthly for the agreed delivery window, followed by a success reconciliation/credit once outcomes are verified against the agreed deliverables.
  • Taxes (such as VAT) may apply depending on your location and billing details. Invoice/checkout totals are the source of truth.
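As a rough illustration of the performance-aligned component above, the sketch below applies the fixed 3.69% rate to a measured-outcome value. All monetary figures and the function name are invented for the example; actual amounts are always governed by the agreed engagement terms and the invoice/checkout totals.

```python
# Hypothetical sketch: the fixed performance-aligned rate from the public summary.
PERFORMANCE_RATE = 0.0369  # currently 3.69%, per the billing summary


def performance_aligned_fee(measured_outcome_value: float) -> float:
    """Return the portion of pricing aligned to measured outcomes.

    measured_outcome_value is an illustrative figure agreed during
    reconciliation; it is not defined by this notice.
    """
    return round(measured_outcome_value * PERFORMANCE_RATE, 2)


# Example with an invented outcome value of 10,000 currency units:
print(performance_aligned_fee(10_000))  # -> 369.0
```

Taxes (such as VAT) and the monthly billing/credit cadence described above would apply on top of any such component.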
Fairness & exceptions
Reconciliation is based on the agreed deliverables and the available inputs. If customer-side deliverables (access, approvals, change windows) are delayed after being identified as blockers, we may pause measurement or re-baseline the delivery window. We prefer fast, transparent review and adjust when the facts support it.
License scope
Some scenarios and deeper artifacts are intentionally restricted in public mode. For customization-supported engagements, licensing/procurement and exact deliverables are confirmed before work begins. If you need enterprise-wide terms or invoicing, use Support.
Why Modern Support Consult
Microsoft-first, on-prem friendly
MSC is built around Microsoft infrastructure and cloud operations, with on-premises realities treated as first-class. Built by an Exchange-focused specialist (~30 years IT; ~20 years Microsoft messaging & integration).
Curated, regulated guidance
Guidance is curated and cross-compared rather than a stream of random tips, to reduce noise and keep actions defensible.
Benchmarking + checklist outputs
Scenarios benchmark posture and produce checklist-style remediation aligned with mainstream security/ops methods.
Quadral method (Codex / Metaflow-ready)
Evidence → traceable outcomes: signals, checks, decisions, guidance, exceptions. Designed for consistent benchmarking and audit-friendly reasoning.