# Research Problems

Real-world systems challenges requiring domain expertise in GPU computing, distributed systems, ML pipelines, databases, and security.

## Basic Usage

```bash
# List all problems
frontier-eval --list

# Evaluate a solution (requires Docker)
frontier-eval flash_attn <your_solution.py>

# Evaluate multiple problems
frontier-eval --problems flash_attn,cross_entropy <your_solution.py>
```

## Cloud Evaluation with SkyPilot

Some problems require GPUs or specific hardware. Use [SkyPilot](https://skypilot.readthedocs.io/) to run evaluations on cloud VMs.

**Setup:**

```bash
sky check
```

See [SkyPilot docs](https://skypilot.readthedocs.io/en/latest/getting-started/installation.html) for cloud credential setup.

**Usage:**

```bash
frontier-eval flash_attn <your_solution.py> --skypilot
```

## Batch Evaluation

For evaluating multiple solutions at once, create a pairs file mapping solutions to problems:

```
# pairs.txt format: solution_path:problem_id
solutions/my_flash_attn_v1.py:flash_attn
solutions/my_flash_attn_v2.py:flash_attn
solutions/my_cross_entropy.py:cross_entropy
```
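
For larger runs it can be easier to generate the pairs file than to write it by hand. A minimal sketch (this helper is hypothetical, not part of `frontier-eval`; it just emits the `solution_path:problem_id` format shown above):

```python
from pathlib import Path

# Hypothetical helper: write a pairs file in the solution_path:problem_id format.
pairs = {
    "solutions/my_flash_attn_v1.py": "flash_attn",
    "solutions/my_flash_attn_v2.py": "flash_attn",
    "solutions/my_cross_entropy.py": "cross_entropy",
}
Path("pairs.txt").write_text(
    "\n".join(f"{path}:{problem}" for path, problem in pairs.items()) + "\n"
)
```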

Then run:

```bash
# Evaluate all pairs
frontier-eval batch --pairs-file pairs.txt

# Resume interrupted evaluation
frontier-eval batch --pairs-file pairs.txt --resume

# Check status
frontier-eval batch --status --results-dir results/batch
```

## Python API

```python
from frontier_cs import FrontierCSEvaluator

evaluator = FrontierCSEvaluator()

# Single problem
result = evaluator.evaluate("research", problem_id="flash_attn", code=my_code)
print(f"Score: {result.score}")

# With SkyPilot
result = evaluator.evaluate("research", problem_id="flash_attn", code=my_code,
                            backend="skypilot")

# Batch evaluation
results = evaluator.evaluate_batch("research",
                                   problem_ids=["flash_attn", "cross_entropy"],
                                   code=my_code)
```
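
Assuming `evaluate_batch` returns results in the same order as `problem_ids` and that each result exposes `.score` like the single-problem case (check the `frontier_cs` API for the exact return type), scores can be reported like this:

```python
# Assumption: results align with problem_ids and each item has a .score attribute.
for problem_id, result in zip(["flash_attn", "cross_entropy"], results):
    print(f"{problem_id}: {result.score}")
```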

## Problem Structure

Each problem is in its own directory under `research/problems/`:

```
research/problems/
β”œβ”€β”€ flash_attn/           # Single problem
β”‚   β”œβ”€β”€ config.yaml
β”‚   β”œβ”€β”€ readme
β”‚   β”œβ”€β”€ evaluator.py
β”‚   └── resources/
β”œβ”€β”€ gemm_optimization/    # Problem with variants
β”‚   β”œβ”€β”€ squares/
β”‚   β”œβ”€β”€ rectangles/
β”‚   └── ...
└── ...
```

### File Reference

| File | Purpose |
|------|---------|
| `config.yaml` | Runtime config (Docker image, GPU requirement, timeout) |
| `readme` | Problem description, API spec, scoring formula |
| `set_up_env.sh` | Environment setup (install deps, check CUDA) |
| `download_datasets.sh` | Download datasets (for local pre-download) |
| `evaluate.sh` | Evaluation entry point |
| `run_evaluator.sh` | Invokes `evaluator.py` |
| `evaluator.py` | Core evaluation logic |
| `resources/` | Baseline code, benchmark, test data |

### config.yaml Example

```yaml
dependencies:
  uv_project: resources    # Optional: uv project in resources/
datasets: []               # Optional: dataset URLs
tag: hpc                   # Category: os, hpc, ai, db, pl, security
runtime:
  docker:
    image: andylizf/triton-tlx:tlx-nv-cu122
    gpu: true
  timeout_seconds: 1800
```
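
If you need to inspect these settings programmatically (for example, to decide whether a problem needs a GPU before launching it), here is a minimal sketch using PyYAML; the loader is illustrative, not part of the harness, and the path is just an example:

```python
import yaml  # PyYAML: pip install pyyaml

# Illustrative reader for the documented config.yaml schema.
with open("research/problems/flash_attn/config.yaml") as f:
    cfg = yaml.safe_load(f)

runtime = cfg["runtime"]
print("image:", runtime["docker"]["image"])
print("needs GPU:", runtime["docker"]["gpu"])
print("timeout (s):", runtime["timeout_seconds"])
```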

## Evaluation Flow

Inside the Docker container, the execution order is:

```
1. set_up_env.sh         β†’  Initialize environment
2. Copy solution.py      β†’  /work/execution_env/solution_env/
3. evaluate.sh           β†’  Check files, call run_evaluator.sh
4. run_evaluator.sh      β†’  python3 evaluator.py
5. evaluator.py          β†’  Load Solution.solve(), run benchmark, print score
```

The final score is extracted from the last numeric line of stdout.
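
In other words, whatever `evaluator.py` prints last as a bare number becomes the score. A minimal sketch of that convention (an illustrative reimplementation, not the harness's actual parser):

```python
import re

def extract_score(stdout: str) -> float:
    """Return the last stdout line that parses as a number (illustrative)."""
    for line in reversed(stdout.strip().splitlines()):
        line = line.strip()
        if re.fullmatch(r"[-+]?\d+(?:\.\d+)?(?:[eE][-+]?\d+)?", line):
            return float(line)
    raise ValueError("no numeric score line found in stdout")
```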

## Solution Interface

Submit a `solution.py` implementing the `Solution` class. The interface varies by problem type:

### Triton Kernel Problems (flash_attn, cross_entropy, gemm_optimization...)

```python
class Solution:
    def solve(self, spec_path: str = None) -> dict:
        """
        Returns either:
        - {"code": "python_code_string"}
        - {"program_path": "path/to/kernel.py"}
        """
        kernel_code = '''
import triton
import triton.language as tl

@triton.jit
def my_kernel(...):
    ...

def entry_function(...):
    ...
'''
        return {"code": kernel_code}
```
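
To make the shape of a submission concrete, here is a sketch that returns a complete (if trivial) vector-add kernel in the `{"code": ...}` form. The kernel itself is the standard Triton tutorial example; the entry-point name and signature are problem-specific, so check the problem's `readme` before reusing this:

```python
class Solution:
    def solve(self, spec_path: str = None) -> dict:
        # Illustrative vector-add kernel; real problems define their own
        # required entry point and signature in their readme.
        kernel_code = '''
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def entry_function(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
'''
        return {"code": kernel_code}
```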


### ML Training Problems (imagenet_pareto...)

```python
class Solution:
    def solve(self, train_loader, val_loader, metadata: dict) -> torch.nn.Module:
        """
        Train and return a model.

        metadata contains: num_classes, input_dim, param_limit,
                           baseline_accuracy, device, etc.
        """
        model = MyModel(...)
        # training loop
        return model
```
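
A minimal sketch of a compliant `solve()` follows. The `metadata` keys used (`device`, `input_dim`, `num_classes`, `param_limit`) come from the list above; the architecture, hyperparameters, and epoch count are purely illustrative, and a real submission should size its model against `param_limit`:

```python
import torch
import torch.nn as nn

class Solution:
    def solve(self, train_loader, val_loader, metadata: dict) -> nn.Module:
        device = metadata["device"]
        # Illustrative architecture; size it to respect metadata["param_limit"].
        model = nn.Sequential(
            nn.Linear(metadata["input_dim"], 256),
            nn.ReLU(),
            nn.Linear(256, metadata["num_classes"]),
        ).to(device)
        assert sum(p.numel() for p in model.parameters()) <= metadata["param_limit"]

        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(5):  # epoch count is arbitrary here
            model.train()
            for inputs, targets in train_loader:
                inputs, targets = inputs.to(device), targets.to(device)
                optimizer.zero_grad()
                loss_fn(model(inputs), targets).backward()
                optimizer.step()
        return model
```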


### Other Problems

Check each problem's `readme` for the specific `solve()` signature and return type.