Bump version: → 0.13.1.dev1 (#4025)
Adds _internal_proc initialization in Service (#4029)
Improve wandb.init() error message (#4030)
Update pypi package readme shields (#4036)
Revert "Ensures `metadata` passed to `Artifact()` is a legal dict" (#4039); reverts #3975 (commit 5e908d568fb3c972feb0681e035ddd258c7bfec5)
Improve naming of standalone tests (#4040)
Adds CHANGELOG for release 0.13.0 (#4031)
Fixes PL tests to use the new API (#4042)
Bump version: 0.13.1.dev1 → 0.13.0
Prevent run.log() from mutating passed in arguments (#4058)
Bump version: 0.13.0 → 0.13.1.dev1
Bump version: 0.13.1.dev1 → 0.13.1
Add changelog
Fix changelog
The `CHANGELOG.md` additions for this release:

```diff
@@ -1,3 +1,61 @@
+## 0.13.1 (August 5, 2022)
+
+#### :bug: Bug Fix
+* Prevents run.log() from mutating passed in arguments by @kptkin in https://github.com/wandb/wandb/pull/4058
+
+**Full Changelog**: https://github.com/wandb/wandb/compare/v0.13.0...v0.13.1
+
+## 0.13.0 (August 3, 2022)
+
+#### :nail_care: Enhancement
+* Turns service on by default by @kptkin in https://github.com/wandb/wandb/pull/3895
+* Adds support logic for handling server provided messages by @kptkin in https://github.com/wandb/wandb/pull/3706
+* Allows runs to produce jobs on finish by @KyleGoyette in https://github.com/wandb/wandb/pull/3810
+* Adds Job, QueuedRun and job handling in launch by @KyleGoyette in https://github.com/wandb/wandb/pull/3809
+* Supports instance roles in EC2 and EKS in the launch agent by @KyleGoyette in https://github.com/wandb/wandb/pull/3596
+* Adds default behavior to the Keras Callback: always save model checkpoints as artifacts by @vwrj in https://github.com/wandb/wandb/pull/3909
+* Sanitizes the artifact name in the KerasCallback for model artifact saving by @vwrj in https://github.com/wandb/wandb/pull/3927
+* Improves console logging by moving the emulator to the service process by @raubitsj in https://github.com/wandb/wandb/pull/3828
+* Fixes data corruption issue when logging large sizes of data by @kptkin in https://github.com/wandb/wandb/pull/3920
+* Adds the state to the Sweep repr in the Public API by @hu-po in https://github.com/wandb/wandb/pull/3948
+* Adds an option to specify a different root dir for git using settings or environment variables by @bcsherma in https://github.com/wandb/wandb/pull/3250
+* Adds an option to pass `remote url` and `commit hash` as arguments to settings or as environment variables by @kptkin in https://github.com/wandb/wandb/pull/3934
+* Improves time resolution for tracked metrics and for system metrics by @raubitsj in https://github.com/wandb/wandb/pull/3918
+* Defaults to project name from the sweep config when project is not specified in the `wandb.sweep()` call by @hu-po in https://github.com/wandb/wandb/pull/3919
+* Adds support for using the namespace set by the launch agent by @KyleGoyette in https://github.com/wandb/wandb/pull/3950
+* Adds telemetry to track when a run might be overwritten by @raubitsj in https://github.com/wandb/wandb/pull/3998
+* Adds a tool to export `wandb`'s history into `sqlite` by @raubitsj in https://github.com/wandb/wandb/pull/3999
+* Replaces some `Mapping[str, ...]` types with `NamedTuples` by @speezepearson in https://github.com/wandb/wandb/pull/3996
+* Adds import hook for run telemetry by @kptkin in https://github.com/wandb/wandb/pull/3988
+* Implements profiling support for IPUs by @cameron-martin in https://github.com/wandb/wandb/pull/3897
+
+#### :bug: Bug Fix
+* Fixes sweep agent with service by @raubitsj in https://github.com/wandb/wandb/pull/3899
+* Fixes an empty type equals invalid type and how artifact dictionaries are handled by @KyleGoyette in https://github.com/wandb/wandb/pull/3904
+* Fixes `wandb.Config` object to support default values when getting an attribute by @farizrahman4u in https://github.com/wandb/wandb/pull/3820
+* Removes default config from jobs by @KyleGoyette in https://github.com/wandb/wandb/pull/3973
+* Fixes an issue where patch is `None` by @KyleGoyette in https://github.com/wandb/wandb/pull/4003
+* Fixes requirements.txt parsing in nightly SDK installation checks by @dmitryduev in https://github.com/wandb/wandb/pull/4012
+* Fixes 409 Conflict handling when GraphQL requests time out by @raubitsj in https://github.com/wandb/wandb/pull/4000
+* Fixes service teardown handling if the user process has been terminated by @raubitsj in https://github.com/wandb/wandb/pull/4024
+* Adds `storage_path` and fixes `artifact.files` by @vanpelt in https://github.com/wandb/wandb/pull/3969
+* Fixes performance issue syncing runs with a large number of media files by @vanpelt in https://github.com/wandb/wandb/pull/3941
+
+#### :broom: Cleanup
+* Adds an escape hatch to disable service by @kptkin in https://github.com/wandb/wandb/pull/3829
+* Annotates `wandb/docker` and reverts change in the docker fixture by @dmitryduev in https://github.com/wandb/wandb/pull/3871
+* Fixes GFLOPS to GFLOPs in the Keras `WandbCallback` by @ayulockin in https://github.com/wandb/wandb/pull/3913
+* Adds type annotations for `file_stream.py` by @dmitryduev in https://github.com/wandb/wandb/pull/3907
+* Renames repository from `client` to `wandb` by @dmitryduev in https://github.com/wandb/wandb/pull/3977
+* Updates documentation: adding `--report_to wandb` for the HuggingFace Trainer by @ayulockin in https://github.com/wandb/wandb/pull/3959
+* Makes aliases optional in link_artifact by @vwrj in https://github.com/wandb/wandb/pull/3986
+* Renames `wandb local` to `wandb server` by @jsbroks in https://github.com/wandb/wandb/pull/3793
+* Updates README badges by @raubitsj in https://github.com/wandb/wandb/pull/4023
+
+## New Contributors
+* @bcsherma made their first contribution in https://github.com/wandb/wandb/pull/3250
+* @cameron-martin made their first contribution in https://github.com/wandb/wandb/pull/3897
+
+**Full Changelog**: https://github.com/wandb/wandb/compare/v0.12.21...v0.13.0
+
 ## 0.12.21 (July 5, 2022)
 
 #### :nail_care: Enhancement
```
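The 0.13.1 fix above changes what `run.log()` does with the dictionary it receives. A minimal sketch of the now-guaranteed behavior, assuming an offline run and a throwaway project name:

```python
import wandb

run = wandb.init(project="scratch", mode="offline")

metrics = {"loss": 0.25, "epoch": 1}
run.log(metrics)

# As of 0.13.1, run.log() works on an internal copy, so the caller's dict
# is left exactly as it was passed in.
assert metrics == {"loss": 0.25, "epoch": 1}

run.finish()
```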
`README.md`: the heading line that follows the centered logo (`<img src="https://i.imgur.com/RUtiVzH.png" width="600" />`), `# Weights and Biases`, carries the PyPI (https://pypi.python.org/pypi/wandb), conda-forge (https://anaconda.org/conda-forge/wandb), CircleCI (https://circleci.com/gh/wandb/wandb), and codecov (https://codecov.io/gh/wandb/wandb) badges; the PyPI readme shields update (#4036) rewrites this line's badge images.
`.bumpversion.cfg`:

```diff
@@ -1,5 +1,5 @@
 [bumpversion]
-current_version = 0.13.1.dev1
+current_version = 0.13.1
 commit = True
 tag = False
 parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)((?P<prekind>[a-z]+)(?P<pre>\d+))?(\.(?P<devkind>[a-z]+)(?P<dev>\d+))?
```
`setup.py`:

```diff
@@ -50,7 +50,7 @@
 
 setup(
     name="wandb",
-    version="0.13.1.dev1",
+    version="0.13.1",
     description="A CLI and library for interacting with the Weights and Biases API.",
     long_description=readme,
     long_description_content_type="text/markdown",
```
The PyTorch Lightning standalone tests move to the new `Trainer` API (#4042), passing `devices`, `accelerator`, and `strategy` explicitly:

```diff
@@ -29,8 +29,9 @@ def main():
     # Initialize a trainer
     trainer = pl.Trainer(
         max_epochs=1,
-        ...
-        accelerator=...,
+        devices=2,
+        accelerator="cpu",
+        strategy="ddp",
         logger=wandb_logger,
     )
 
```
The same update in the `ddp_spawn` variant:

```diff
@@ -26,7 +26,8 @@ def main():
     # Initialize a trainer
     trainer = Trainer(
         max_epochs=1,
-        ...
+        devices=2,
+        accelerator="cpu",
         strategy="ddp_spawn",
         logger=wandb_logger,
     )
```
The DDP/GPU variant also picks up the `pathlib`-based run naming for its `WandbLogger` (#4040):

```diff
@@ -1,6 +1,7 @@
 #!/usr/bin/env python
 
 import os
+import pathlib
 
 from pl_base import BoringModel, RandomDataset
 from pytorch_lightning import Trainer
@@ -30,13 +31,17 @@ def main():
     # set up wandb
     config = dict(some_hparam="Logged Before Trainer starts DDP")
     wandb_logger = WandbLogger(
-        log_model=True, ...
+        log_model=True,
+        config=config,
+        save_code=True,
+        name=pathlib.Path(__file__).stem,
     )
 
     # Initialize a trainer
     trainer = Trainer(
         max_epochs=2,
-        ...
+        devices=2,
+        accelerator="gpu",
         strategy="ddp",
         logger=wandb_logger,
     )
```
The TPU variant spells out the full `Trainer` configuration:

```diff
@@ -27,7 +27,13 @@ def main():
     wandb_logger = WandbLogger(log_model=True, config=config, save_code=True)
 
     # Initialize a trainer
-    trainer = Trainer(...)
+    trainer = Trainer(
+        max_epochs=1,
+        logger=wandb_logger,
+        accelerator="tpu",
+        devices=8,
+        strategy="ddp",
+    )
 
     # Train the model
     trainer.fit(model, train, val)
```
The standalone artifact tests take their run names from the script's `pathlib` stem instead of `__file__` (#4040):

```diff
@@ -1,11 +1,13 @@
 import os
+import pathlib
 import shutil
 import time
 
 import numpy as np
 import wandb
 
 
+run_name_base = pathlib.Path(__file__).stem
 init_count = 1
 
 
@@ -41,7 +43,7 @@ def test_artifact_run_lookup_apis():
     artifact_2_name = f"a2-{str(time.time())}"
 
     # Initial setup
-    run_1 = wandb.init(name=f"{...}")
+    run_1 = wandb.init(name=f"{run_name_base}-{get_init_count()}")
     artifact = wandb.Artifact(artifact_1_name, "test_type")
     artifact.add(wandb.Image(np.random.randint(0, 255, (10, 10))), "image")
     run_1.log_artifact(artifact)
@@ -51,14 +53,14 @@
     run_1.finish()
 
     # Create a second version for a1
-    run_2 = wandb.init(name=f"{...}")
+    run_2 = wandb.init(name=f"{run_name_base}-{get_init_count()}")
     artifact = wandb.Artifact(artifact_1_name, "test_type")
     artifact.add(wandb.Image(np.random.randint(0, 255, (10, 10))), "image")
     run_2.log_artifact(artifact)
     run_2.finish()
 
     # Use both
-    run_3 = wandb.init(name=f"{...}")
+    run_3 = wandb.init(name=f"{run_name_base}-{get_init_count()}")
     a1 = run_3.use_artifact(artifact_1_name + ":latest")
     assert _runs_eq(a1.used_by(), [run_3])
     assert _run_eq(a1.logged_by(), run_2)
@@ -68,7 +70,7 @@
     run_3.finish()
 
     # Use both
-    run_4 = wandb.init(name=f"{...}")
+    run_4 = wandb.init(name=f"{run_name_base}-{get_init_count()}")
     a1 = run_4.use_artifact(artifact_1_name + ":latest")
     assert _runs_eq(a1.used_by(), [run_3, run_4])
     a2 = run_4.use_artifact(artifact_2_name + ":latest")
@@ -80,19 +82,19 @@ def test_artifact_creation_with_diff_type():
     artifact_name = f"a1-{str(time.time())}"
 
     # create
-    with wandb.init(name=f"{...}") as run:
+    with wandb.init(name=f"{run_name_base}-{get_init_count()}") as run:
         artifact = wandb.Artifact(artifact_name, "artifact_type_1")
         artifact.add(wandb.Image(np.random.randint(0, 255, (10, 10))), "image")
         run.log_artifact(artifact)
 
     # update
-    with wandb.init(name=f"{...}") as run:
+    with wandb.init(name=f"{run_name_base}-{get_init_count()}") as run:
         artifact = wandb.Artifact(artifact_name, "artifact_type_1")
         artifact.add(wandb.Image(np.random.randint(0, 255, (10, 10))), "image")
         run.log_artifact(artifact)
 
     # invalid
-    with wandb.init(name=f"{...}") as run:
+    with wandb.init(name=f"{run_name_base}-{get_init_count()}") as run:
         artifact = wandb.Artifact(artifact_name, "artifact_type_2")
         artifact.add(wandb.Image(np.random.randint(0, 255, (10, 10))), "image_2")
         did_err = False
@@ -106,7 +108,7 @@ def test_artifact_creation_with_diff_type():
     )
     assert did_err
 
-    with wandb.init(name=f"{...}") as run:
+    with wandb.init(name=f"{run_name_base}-{get_init_count()}") as run:
         artifact = run.use_artifact(artifact_name + ":latest")
         # should work
         image = artifact.get("image")
```
A simpler standalone script gets the same naming treatment:

```diff
@@ -1,8 +1,10 @@
+import pathlib
+
 import wandb
 
 
 def main():
-    run = wandb.init(name=__file__)
+    run = wandb.init(name=pathlib.Path(__file__).stem)
     run.log({"boom": 1})
     run.finish()
 
```
Likewise for the plots standalone test:

```diff
@@ -1,4 +1,5 @@
 import math
+import pathlib
 import random
 import sys
 
@@ -7,7 +8,7 @@
 
 def main(argv):
     # wandb.init(entity="wandb", project="new-plots-test-5")
-    wandb.init(name=__file__)
+    wandb.init(name=pathlib.Path(__file__).stem)
     data = [[i, random.random() + math.sin(i / 10)] for i in range(100)]
     table = wandb.Table(data=data, columns=["step", "height"])
     line_plot = wandb.plot.line(
```
The Keras standalone test:

```diff
@@ -1,3 +1,5 @@
+import pathlib
+
 import keras  # noqa: F401
 import numpy as np
 import tensorflow as tf
@@ -6,7 +8,7 @@
 
 
 def main():
-    wandb.init(name=__file__)
+    wandb.init(name=pathlib.Path(__file__).stem)
 
     model = tf.keras.models.Sequential()
     model.add(tf.keras.layers.Conv2D(3, 3, activation="relu", input_shape=(28, 28, 1)))
```
The torch/CUDA standalone test:

```diff
@@ -1,9 +1,11 @@
+import pathlib
+
 import numpy as np
 import torch
 import wandb
 
 
-run = wandb.init(name=__file__)
+run = wandb.init(name=pathlib.Path(__file__).stem)
 run.log({"cuda_available": torch.cuda.is_available()})
 x = np.random.random((32, 100)).astype("f")
 t_cpu = torch.Tensor(x)
```
The memory-profiler standalone test:

```diff
@@ -1,3 +1,4 @@
+import pathlib
 import time
 
 from memory_profiler import profile
@@ -17,7 +18,7 @@ def main(count: int, size=(32, 32, 3)) -> wandb.Table:
 
 
 if __name__ == "__main__":
-    run = wandb.init(name=__file__)
+    run = wandb.init(name=pathlib.Path(__file__).stem)
     for c in range(4):
         cnt = 2 * (10**c)
         start = time.time()
```
And the CSV/pandas standalone test, which already used `pathlib` for its data path:

```diff
@@ -5,7 +5,7 @@
 
 
 def main():
-    wandb.init(name=__file__)
+    wandb.init(name=pathlib.Path(__file__).stem)
 
     # Get a pandas DataFrame object of all the data in the csv file:
     df = pd.read_csv(pathlib.Path(__file__).parent.resolve() / "tweets.csv")
```
The artifact tests drop the metadata-validation suite removed by the revert (#4039):

```diff
@@ -1,7 +1,6 @@
 import base64
 import hashlib
 import os
-from typing import Any, Callable, Optional, Type
 import pytest
 from wandb import util
 import wandb
@@ -1336,7 +1335,7 @@ def test_lazy_artifact_passthrough(runner, live_mock_server, test_settings):
 
     for setter in testable_setters_valid + testable_setters_invalid:
         with pytest.raises(ValueError):
-            setattr(art, setter, ...)
+            setattr(art, setter, "TEST")
 
     for method in testable_methods_valid + testable_methods_invalid:
         attr_method = getattr(art, method)
@@ -1350,7 +1349,7 @@ def test_lazy_artifact_passthrough(runner, live_mock_server, test_settings):
        _ = getattr(art, getter)
 
     for setter in testable_setters_valid + testable_setters_invalid:
-        setattr(art, setter, ...)
+        setattr(art, setter, "TEST")
 
     for method in testable_methods_valid + testable_methods_invalid:
         attr_method = getattr(art, method)
@@ -1397,54 +1396,3 @@ def test_communicate_artifact(runner, publish_util, mocked_run):
     artifact_publish = dict(run=mocked_run, artifact=artifact, aliases=["latest"])
     ctx_util = publish_util(artifacts=[artifact_publish])
     assert len(set(ctx_util.manifests_created_ids)) == 1
-
-
-def _create_artifact_and_set_metadata(metadata):
-    artifact = wandb.Artifact("foo", "dataset")
-    artifact.metadata = metadata
-    return artifact
-
-
-# All these metadata-validation tests should behave identically
-# regardless of whether we set the metadata by passing it into the constructor
-# or by setting the attribute after creation; so, parametrize how we build the
-# artifact, and run tests both ways.
-@pytest.mark.parametrize(
-    "create_artifact",
-    [
-        lambda metadata: wandb.Artifact("foo", "dataset", metadata=metadata),
-        _create_artifact_and_set_metadata,
-    ],
-)
-class TestArtifactChecksMetadata:
-    def test_validates_metadata_ok(
-        self, create_artifact: Callable[..., wandb.Artifact]
-    ):
-        assert create_artifact(metadata=None).metadata == {}
-        assert create_artifact(metadata={"foo": "bar"}).metadata == {"foo": "bar"}
-
-    def test_validates_metadata_err(
-        self, create_artifact: Callable[..., wandb.Artifact]
-    ):
-        with pytest.raises(TypeError):
-            create_artifact(metadata=123)
-
-        with pytest.raises(TypeError):
-            create_artifact(metadata=[])
-
-        with pytest.raises(TypeError):
-            create_artifact(metadata={"unserializable": object()})
-
-    def test_deepcopies_metadata(self, create_artifact: Callable[..., wandb.Artifact]):
-        orig_metadata = {"foo": ["original"]}
-        artifact = create_artifact(metadata=orig_metadata)
-
-        # ensure `artifact.metadata` isn't just a reference to the argument
-        assert artifact.metadata is not orig_metadata
-        orig_metadata["bar"] = "modifying the top-level value"
-        assert "bar" not in artifact.metadata
-
-        # ensure that any mutable sub-values are also copies
-        assert artifact.metadata["foo"] is not orig_metadata["foo"]
-        orig_metadata["foo"].append("modifying the sub-value")
-        assert artifact.metadata["foo"] == ["original"]
```
A regression test for the `run.log()` mutation fix (#4058):

```diff
@@ -27,6 +27,13 @@ def test_run_step_property(fake_run):
     assert run.step == 2
 
 
+def test_log_avoids_mutation(fake_run):
+    run = fake_run()
+    d = dict(this=1)
+    run.log(d)
+    assert d == dict(this=1)
+
+
 def test_deprecated_run_log_sync(fake_run, capsys):
     run = fake_run()
     run.log(dict(this=1), sync=True)
```
The mock-server fixture bumps the advertised `max_cli_version`:

```diff
@@ -72,7 +72,7 @@ def default_ctx():
         "run_queues": {"1": []},
         "num_popped": 0,
         "num_acked": 0,
-        "max_cli_version": "0...",
+        "max_cli_version": "0.14.0",
         "runs": {},
         "run_ids": [],
         "file_names": [],
```
`tox.ini` pins the newer yea-wandb harness:

```diff
@@ -13,7 +13,7 @@ envlist = black,
 
 [base]
 setenv =
-    YEA_WANDB_VERSION = 0.8...
+    YEA_WANDB_VERSION = 0.8.6
 
 [unitbase]
 deps =
```
`wandb/__init__.py`:

```diff
@@ -11,7 +11,7 @@
 
 For reference documentation, see https://docs.wandb.com/ref/python.
 """
-__version__ = "0.13.1.dev1"
+__version__ = "0.13.1"
 
 # Used with pypi checks and other messages related to pip
 _wandb_module = "wandb"
```
The service client initializes `_internal_proc` up front (#4029):

```diff
@@ -26,6 +26,8 @@ def __init__(self, _use_grpc: bool = False) -> None:
         self._stub = None
         self._grpc_port = None
         self._sock_port = None
+        self._internal_proc = None
+
         # current code only supports grpc or socket server implementation, in the
         # future we might be able to support both
         if _use_grpc:
```
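Defining the attribute in `__init__` means later code can check it safely even when the internal process was never launched. A hedged, illustrative sketch of that pattern (not the actual wandb service code; class and method names here are made up):

```python
import subprocess
import sys
from typing import Optional


class ServiceSketch:
    """Illustrative stand-in for the pattern above, not the real wandb Service."""

    def __init__(self) -> None:
        # Always defined, so teardown() never hits an AttributeError.
        self._internal_proc: Optional[subprocess.Popen] = None

    def start(self) -> None:
        # The real client launches wandb's internal service process here;
        # a sleeping Python subprocess stands in for it.
        self._internal_proc = subprocess.Popen(
            [sys.executable, "-c", "import time; time.sleep(60)"]
        )

    def teardown(self) -> None:
        # Safe to call whether or not start() ever ran.
        if self._internal_proc is not None:
            self._internal_proc.terminate()
            self._internal_proc = None
```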
The corresponding revert in the artifacts module removes `_normalize_metadata` and stores metadata as given (#4039):

```diff
@@ -1,7 +1,6 @@
 import base64
 import contextlib
 import hashlib
-import json
 import os
 import pathlib
 import re
@@ -10,7 +9,6 @@
 import time
 from typing import (
     Any,
-    cast,
     Dict,
     Generator,
     IO,
@@ -78,14 +76,6 @@ def __init__(self, entry: ArtifactEntry, obj: data_types.WBValue):
         self.obj = obj
 
 
-def _normalize_metadata(metadata: Optional[Dict[str, Any]]) -> Dict[str, Any]:
-    if metadata is None:
-        return {}
-    if not isinstance(metadata, dict):
-        raise TypeError(f"metadata must be dict, not {type(metadata)}")
-    return cast(Dict[str, Any], json.loads(json.dumps(metadata)))
-
-
 class Artifact(ArtifactInterface):
     """
     Flexible and lightweight building block for dataset and model versioning.
@@ -148,7 +138,6 @@ def __init__(
                 "Artifact name may only contain alphanumeric characters, dashes, underscores, and dots. "
                 'Invalid name: "%s"' % name
             )
-        metadata = _normalize_metadata(metadata)
         # TODO: this shouldn't be a property of the artifact. It's a more like an
         # argument to log_artifact.
         storage_layout = StorageLayout.V2
@@ -174,7 +163,7 @@ def __init__(
         self._type = type
         self._name = name
         self._description = description
-        self._metadata = metadata
+        self._metadata = metadata or {}
         self._distributed_id = None
         self._logged_artifact = None
         self._incremental = False
@@ -300,7 +289,6 @@ def metadata(self) -> dict:
 
     @metadata.setter
     def metadata(self, metadata: dict) -> None:
-        metadata = _normalize_metadata(metadata)
        if self._logged_artifact:
             self._logged_artifact.metadata = metadata
             return
```
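After the revert, metadata is stored as passed (defaulting to an empty dict) rather than being validated and deep-copied. A small sketch of the resulting behavior, using hypothetical artifact names:

```python
import wandb

# Metadata passed to Artifact() is kept as given.
artifact = wandb.Artifact("example-dataset", type="dataset", metadata={"rows": 100})
assert artifact.metadata == {"rows": 100}

# Omitting metadata falls back to an empty dict (metadata or {}).
empty = wandb.Artifact("example-dataset-empty", type="dataset")
assert empty.metadata == {}
```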
The `wandb.init()` error-message change (#4030) drops the start-method suggestions from the timeout error:

```diff
@@ -676,8 +676,6 @@ def init(self) -> Union[Run, RunDisabled, None]:  # noqa: C901
         logger.error("backend process timed out")
         error_message = "Error communicating with wandb process"
         if active_start_method != "fork":
-            error_message += "\ntry: wandb.init(settings=wandb.Settings(start_method='fork'))"
-            error_message += "\nor: wandb.init(settings=wandb.Settings(start_method='thread'))"
             error_message += (
                 f"\nFor more info see: {wburls.get('doc_start_err')}"
             )
```
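Overriding the start method is still possible through settings, which is what the removed lines used to suggest. A hedged sketch (the project name is a placeholder, and offline mode is used just to keep the snippet self-contained):

```python
import wandb

# Explicitly pick the process start method instead of relying on the default.
run = wandb.init(
    project="scratch",
    mode="offline",
    settings=wandb.Settings(start_method="thread"),
)
run.finish()
```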
And the `run.log()` mutation fix itself (#4058): the partial-history callback copies the row before touching it:

```diff
@@ -1246,6 +1246,7 @@ def _partial_history_callback(
         step: Optional[int] = None,
         commit: Optional[bool] = None,
     ) -> None:
+        row = row.copy()
         if row:
             row = self._visualization_hack(row)
             now = time.time()
```
Use W&B to build better models faster. Track and visualize all the pieces of your machine learning pipeline, from datasets to production models.
Sign up for a free account →
```shell
pip install wandb
```
In your training script:
```python
import wandb

# Your custom arguments defined here
args = ...

wandb.init(config=args, project="my-project")
wandb.config["more"] = "custom"


def training_loop():
    while True:
        # Do some machine learning
        epoch, loss, val_loss = ...
        # Framework agnostic / custom metrics
        wandb.log({"epoch": epoch, "loss": loss, "val_loss": val_loss})
```
If you're already using Tensorboard or TensorboardX, you can integrate with one line:
```python
wandb.init(sync_tensorboard=True)
```
Run `wandb login` from your terminal to sign up or authenticate your machine (we store your API key in `~/.netrc`). You can also set the `WANDB_API_KEY` environment variable with a key from your settings.
Run your script with `python my_script.py` and all metadata will be synced to the cloud. You will see a URL in your terminal logs when your script starts and finishes. Data is staged locally in a directory named `wandb` relative to your script. If you want to test your script without syncing to the cloud, you can set the environment variable `WANDB_MODE=dryrun`.
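A minimal sketch of the same offline workflow done from Python, setting `WANDB_MODE` before the run starts (the project name is just a placeholder):

```python
import os

import wandb

# Equivalent to running the script with WANDB_MODE=dryrun: data is staged
# locally under ./wandb and nothing is synced to the cloud.
os.environ["WANDB_MODE"] = "dryrun"

run = wandb.init(project="my-project")
run.log({"loss": 0.1})
run.finish()
```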
If you are using Docker to run your code, we provide a wrapper command, `wandb docker`, that mounts your current directory, sets environment variables, and ensures the wandb library is installed. Training your models in Docker gives you the ability to restore the exact code and environment with the `wandb restore` command.
Sign up for a free account → Introduction video →
Framework specific and detailed usage can be found in our documentation.
To run basic tests, use `make test`. More detailed information can be found in CONTRIBUTING.md.
We use CircleCI for CI.
If you'd like a free academic account for your research group, reach out to us →
We make it easy to cite W&B in your published paper. Learn more →
Got questions, feedback or want to join a community of ML engineers working on exciting projects?
Join our [Slack community](https://bit.ly/wb-slack).
Follow us on Twitter.