Metadata-Version: 2.1
Name: guide-framework
Version: 1.0.7
Summary: GUIDE Framework – Ethical AI Governance, Explainability (XAI), Transparency & Responsible AI Auditing.
Home-page: https://github.com/yourusername/guide-framework
Author: Kamal Master
Author-email: ktabine@gmail.com
Keywords: ethical-ai,ai-governance,xai,explainability,transparency,responsible-ai,fairness,bias-audit,equity,governance,guide-framework
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Information Technology
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: openai
Requires-Dist: requests
Requires-Dist: numpy
Requires-Dist: pandas
Requires-Dist: scikit-learn

# GUIDE Framework – Ethical AI, Governance & Explainability (XAI)

`guide-framework` is a modern Ethical AI toolkit that helps organizations **audit**, **explain**, and **govern** AI systems using the G.U.I.D.E. principles:

## 🌟 GUIDE AI ETHICS FRAMEWORK  
**G – Governance**  
**U – Universal Design**  
**I – Identification**  
**D – Dignity**  
**E – Equity**

This framework ensures:
- Fairness  
- Transparency  
- Explainability  
- Accessibility (Section 508 / WCAG)  
- Responsible AI decision-making  
- Ethical auditing and governance  

It includes both **Ethical AI auditing** and **XAI-based explainability** tools.

---

# 🚀 INSTALLATION

```bash
pip install guide-framework
```

Requires Python **3.8+**.

The data utilities it uses (`pandas`, `scikit-learn`, `numpy`) are installed automatically as dependencies; to upgrade them manually:

```bash
pip install --upgrade pandas scikit-learn numpy
```

---

# 🧠 QUICK START EXAMPLES

## ✅ 1. Ethical AI Audit

```python
from guide_framework import EthicalAIAudit

audit = EthicalAIAudit("LoanModel")

audit.transparency_check({
    "model_name": "LoanModel",
    "version": "1.0",
    "training_data": "internal",
    "documentation": True
})

audit.fairness_check("Evaluate this loan application.")
audit.safety_check("This system may present danger.")
audit.accessibility_check("This explanation is readable and clear.")

report = audit.finalize_report()
print(report)
```

**Output example:**

```json
{
  "model": "LoanModel",
  "results": {
    "transparency": {"status": "pass"},
    "fairness": {"status": "pass"},
    "safety": {"status": "fail"},
    "accessibility": {"status": "pass"}
  },
  "score": "3/4",
  "approved": true
}
```
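Assuming `finalize_report()` returns a dict shaped like the JSON above (an assumption; check the package source for the exact structure), the report can be consumed programmatically. A minimal sketch using a sample report:

```python
# A sample report shaped like the output above (assumed structure).
report = {
    "model": "LoanModel",
    "results": {
        "transparency": {"status": "pass"},
        "fairness": {"status": "pass"},
        "safety": {"status": "fail"},
        "accessibility": {"status": "pass"},
    },
    "score": "3/4",
    "approved": True,
}

# Collect the checks that failed so they can be escalated for review.
failed = [name for name, result in report["results"].items()
          if result["status"] != "pass"]
print(failed)  # ['safety']
```

This makes it easy to gate a deployment pipeline on specific checks (for example, treating any `safety` failure as blocking regardless of the overall score).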

---

## ✅ 2. Explainability (XAI)

```python
from guide_framework import ExplainabilityGuide

xai = ExplainabilityGuide(api_key="YOUR_OPENAI_KEY")

explanation = xai.explain_decision(
    "The system rejected the loan.",
    context="Applicant missing income documentation"
)

print(explanation)
```

---

## ✅ 3. Transparency Summary

```python
metadata = {
    "model_name": "LoanModel",
    "purpose": "Predict risk",
    "training_data": "public datasets",
    "version": "1.0.0",
    "risks": "May underperform for seniors"
}

print(xai.transparency_summary(metadata))
```
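`transparency_summary` works from a metadata dict, so it is worth validating the dict before calling it. The helper below is hypothetical (not part of the package), and the required-key set is an assumption based on the example above:

```python
# Keys the transparency summary example above relies on (assumed, not a package contract).
REQUIRED_KEYS = {"model_name", "purpose", "training_data", "version", "risks"}

def missing_metadata_keys(metadata: dict) -> set:
    """Return the required keys absent from a model metadata dict."""
    return REQUIRED_KEYS - metadata.keys()

metadata = {
    "model_name": "LoanModel",
    "purpose": "Predict risk",
    "training_data": "public datasets",
    "version": "1.0.0",
}
print(missing_metadata_keys(metadata))  # {'risks'}
```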

---

# 🔍 MODULE OVERVIEW

```
guide_framework/
  ├─ auditor.py           # Ethical audit (transparency, safety, fairness, 508)
  ├─ xai_meaning.py       # XAI explainability & transparency
  ├─ governance.py        # G – Governance
  ├─ understanding.py     # U – Understanding
  ├─ integrity.py         # I – Identification / Integrity
  ├─ disclosure.py        # D – Disclosure
  ├─ equity.py            # E – Equity
  ├─ situation_templates.py
  ├─ knowledge_base.py
  ├─ guide_simple.py
  ├─ guide.py
  └─ other support files…
```

---

# 📋 GUIDE ETHICAL CHECKLIST

### G – Governance  
☑ Oversight structure  
☑ Documentation  
☑ Model lifecycle control  

### U – Universal Design  
☑ Inclusive testing  
☑ Diverse groups validated  

### I – Identification  
☑ Clearly identify all AI interactions  

### D – Dignity  
☑ No harm / disrespect  
☑ Safe and humane AI outputs  

### E – Equity  
☑ Fairness across demographic groups  
☑ Bias detection  

---

# 🧩 FAIRNESS RECOMMENDATIONS (Auto-generated examples)

1. Remove direct sensitive attributes  
2. Re-check group-level disparities every quarter  
3. Add human review for borderline cases  
4. Provide model explanations to affected users  
5. Validate performance across age, gender, culture  
6. Establish ethical escalation procedures  

---

# 🔐 API KEYS & SECURITY

Your OpenAI key is **never stored by the package**.

Pass it directly (convenient for local testing, but avoid committing keys to source control):

```python
xai = ExplainabilityGuide(api_key="sk-xxxx")
```

Or, preferably, set an environment variable:

```bash
export OPENAI_API_KEY="sk-xxxx"
```

The library will automatically load it.
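If you load the key yourself, the standard pattern reads it from the environment with `os.environ.get` (the exact fallback behavior inside the package is an assumption; this snippet is just the generic pattern):

```python
import os

# Read the key from the environment; None means it was not set.
api_key = os.environ.get("OPENAI_API_KEY")
print("OPENAI_API_KEY set:", api_key is not None)
```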

---

# ⚠ TROUBLESHOOTING

If imports fail or features are missing, upgrade to the latest release first:

```bash
pip install --upgrade guide-framework
```

If a local build from source fails, remove stale build artifacts and rebuild (PowerShell shown; on Linux/macOS use `rm -rf dist build *.egg-info`):

```powershell
Remove-Item -Recurse -Force dist, build, *.egg-info
python setup.py sdist bdist_wheel
```

---

# 📜 LICENSE  
MIT License

---

# 👤 AUTHOR  
**Kamal Master**  
Email: **ktabine@gmail.com**

---

# ✔ GUIDE AUDIT COMPLETE  
**Build AI that earns trust through ethical excellence.**
