This repository was archived by the owner on Apr 23, 2025. It is now read-only.

Commit 04193fc

Tensorflow 0.5 (#210)
* Use PascalCase for all Swift source files, except the `main.swift` files that serve as entry points for executables (#152). Running `find . -type f -name "[[:lower:]]*.swift" | rg --pcre2 "^(?!.*(main.swift))"` produces no output. Resolves #133.
* Load config variables from the `hparams.json` file so that Transformer can work with the bigger GPT-2 models (#154). (It is called a "staged release"; as of now, only 117M and 345M are available.)
* Style fixes from #154 (#155)
* A few quick fixes to help unbreak swift-models. (#160)
* Unbreak models by inserting `.call`. (#161) See https://bugs.swift.org/browse/TF-516 for additional context.
* s/CotangentVector/TangentVector/g (#162)
* Add MNIST test set evaluation and change hyperparameters. (#163)
  * The batch size and the optimizer are changed to `128` and `Adam`, respectively, based on test set evaluation results. Tested with higher batch sizes [256, 512], but found no improvement in performance.
* Updated models to use the revised name for callables (SE-0253) and fixed a few issues caused by `swift-apis` changes. (#166)
  * Updated Transformer, MNIST, and ResNet models to work with the 2019-06-04 development snapshot.
  * Updated Catch, CIFAR, and the other models.
  * Added a method to satisfy the TensorGroup protocol so the build passes.
* transformer: upstream API changes (#171)
* Rebuild ResNet with a block-based approach (#170)
* Add Gym Blackjack Q-learning demo (#173)
* Make 'GoModel' stored properties be variables. (#174) New 'Differentiable' derived conformances will not include constant properties in the tangent space.
* Added .swift-format and updated .gitignore for Xcode 11's SwiftPM support. (#176) This replicates the following pull request on swift-apis: tensorflow/swift-apis#374, adding a `swift-format` configuration file. Note the cautions indicated in that pull request around the use of `swift-format` at present. This also adds a .gitignore line to prevent Xcode 11's new SwiftPM support from adding supporting files to the repository.
* Replaced Python with Swift in CIFAR10 dataset loading (#178)
  * Replaced all Python code in the CIFAR10 and ResNet examples, removing the Python 3 dependency.
  * Imported FoundationNetworking on Linux (behind a check), and added an early exit for the cached-directory check.
  * Removed the macOS availability check by targeting 10.13 in the package.
  * Style and formatting fixes; removed no-longer-needed _tensorHandles and supporting code.
* Replace Autoencoder's tanh activation with sigmoid (#180)
* Add GAN example (#181)
  * Removed do blocks, typealiases, and labels; replaced lrelu with a closure.
  * Renamed variables for clarity: latentDim -> latentSize, generatorLossFunc -> generatorLoss, plot -> plotImage, imageGrid -> gridImage.
  * Refactored label creation, broke lines to fit within 100 columns, and updated comments and the readme.
* nightlies URL: s4tf-kokoro-artifact-testing => swift-tensorflow-artifacts (#183)
* First steps in repository reorganization: extracting common MNIST dataset code (#182)
  * Extracted the MNIST dataset, created a LeNet network, and added an example combining the two.
  * Extracted redundant MNIST loading code from the GAN and Autoencoder examples, replacing it with the central MNIST dataset.
  * Renamed input parameters (trainImages -> trainingImages), corrected Python package names, applied standard formatting style, and formatted Package.swift.
  * Co-Authored-By: Richard Wei <[email protected]>
* Update Dockerfile to install Python libraries (#184)
* Delete helpers.swift (#187) This file is no longer needed after apple/swift #26023 and may be causing the segfault reported in swift-apis #186.
* Continuing repository reorganization: extracting common CIFAR-10 and ResNet code (#185)
  * Extracted the CIFAR-10 dataset and ResNet models into respective modules; minor formatting fixes; mirroring PR #187.
* [Models] Fix enumeration of blocks in `ResidualBasicBlockStack` `init` (#192). Fixed the start of the enumeration range and updated the ranges.
* SqueezeNet implementation (#189)
  * Corrected formatting mistakes, used dropout correctly, removed the hard-coded number of classes, and improved overall readability.
  * Renamed numClasses to classCount and limited all lines to 100 columns.
* WideResNet: fix widenFactor and match the model to its citation (#193)
  * Added identity connections, renamed preact1 for clarity, removed an extra relu, and added dropout (skipped in expansion blocks).
  * Fixed a declaration, removed an enum, and collapsed the res layers to one line each.
* Removal of deprecated allDifferentiableVariables. (#194)
* Add JPEG loading/saving via Raw TF operations, removing Matplotlib dependencies (#188)
  * Added an Image struct as a wrapper for JPEG loading/saving, and removed the matplotlib dependency from the GAN and Autoencoder examples using it.
  * Made saveImage() a throwing function, brought it in line with the current API, and improved formatting.
* Convert MNIST classifier to use Sequential (#200); removed the ImageClassification models.
* Convert Autoencoder and Catch to use Sequential (#203). Partially fixes #202.
* Adding shape-based inference tests for all image classification models (#198)
  * Implemented inference tests with random tensors and starting weights for all classification models, and made sure the tests run on Linux.
  * Reshaped the output of SqueezeNet to match the other classification models; WideResNet is expressed in terms of CIFAR10, so its inputs and outputs were adjusted accordingly.
  * Reworked TensorShape initializers to use array literals; minor formatting tweaks.
* Update deprecated APIs in Catch (#205). [swift-apis/#497](tensorflow/swift-apis#497) removes deprecated APIs now that we are moving onward to v0.5; this PR updates the Catch file, which used a deprecated dense layer.
* Update SqueezeNet and add a v1.1 version (#204)
* Fix '@noDerivative' warnings. (#208) (#209)

(cherry picked from commit db72e4d)
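Several of the commits above track the SE-0253 "callable values" change, under which layer types stopped declaring `func call` and gained function-call syntax via `callAsFunction`. A minimal stand-alone sketch of that mechanism (the `Doubler` type is hypothetical, not from this repository):

```swift
// SE-0253: a type declaring `callAsFunction` can be applied with function-call syntax.
struct Doubler {
    func callAsFunction(_ x: Int) -> Int {
        return x * 2
    }
}

let double = Doubler()
print(double(21))  // 42
```

This is why the intermediate `.call` insertions seen in #161 could later be removed: once the compiler recognized `callAsFunction`, `model(input)` worked directly.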
1 parent 1d0baef commit 04193fc


57 files changed: +2140 −1500 lines changed

.gitignore

Lines changed: 3 additions & 0 deletions

```diff
@@ -5,4 +5,7 @@
 *.xcodeproj
 *.png
 .DS_Store
+.swiftpm
 cifar-10-batches-py/
+cifar-10-batches-bin/
+output/
```

.swift-format

Lines changed: 14 additions & 0 deletions

```diff
@@ -0,0 +1,14 @@
+{
+    "version": 1,
+    "lineLength": 100,
+    "indentation": {
+        "spaces": 4
+    },
+    "maximumBlankLines": 1,
+    "respectsExistingLineBreaks": true,
+    "blankLineBetweenMembers": {
+        "ignoreSingleLineProperties": true
+    },
+    "lineBreakBeforeControlFlowKeywords": false,
+    "lineBreakBeforeEachArgument": false
+}
```
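The configuration above pins 4-space indentation, a 100-column line length, and at most one blank line between members. A hypothetical snippet already conforming to those settings, for illustration only (the `Example` type is not from this repository):

```swift
// Formatted per the .swift-format configuration above:
// 4-space indentation, lines under 100 columns, a single blank line between members.
struct Example {
    var name: String

    func greet() -> String {
        return "Hello, \(name)!"
    }
}

print(Example(name: "world").greet())
```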

Autoencoder/README.md

Lines changed: 3 additions & 9 deletions

````diff
@@ -1,5 +1,7 @@
 # Simple Autoencoder
 
+This is an example of a simple 1-dimensional autoencoder model, using MNIST as a training dataset. It should produce output similar to the following:
+
 ### Epoch 1
 <p align="center">
 <img src="images/epoch-1-input.png" height="270" width="360">
@@ -12,7 +14,6 @@
 <img src="images/epoch-10-output.png" height="270" width="360">
 </p>
 
-This directory builds a simple 1-dimensional autoencoder model.
 
 ## Setup
 
@@ -23,12 +24,5 @@ installed. Make sure you've added the correct version of `swift` to your path.
 To train the model, run:
 
 ```
-swift run Autoencoder
-```
-If you using brew to install python2 and modules, change the path:
-- remove brew path '/usr/local/bin'
-- add TensorFlow swift Toolchain /Library/Developer/Toolchains/swift-latest/usr/bin
-
+swift run -c release Autoencoder
 ```
-export PATH=/Library/Developer/Toolchains/swift-latest/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:"${PATH}"
-```
````

Autoencoder/main.swift

Lines changed: 33 additions & 98 deletions

```diff
@@ -12,126 +12,61 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
+import Datasets
 import Foundation
+import ModelSupport
 import TensorFlow
-import Python
 
-// Import Python modules
-let matplotlib = Python.import("matplotlib")
-let np = Python.import("numpy")
-let plt = Python.import("matplotlib.pyplot")
-
-// Turn off using display on server / linux
-matplotlib.use("Agg")
-
-// Some globals
 let epochCount = 10
 let batchSize = 100
-let outputFolder = "./output/"
-let imageHeight = 28, imageWidth = 28
-
-func plot(image: [Float], name: String) {
-    // Create figure
-    let ax = plt.gca()
-    let array = np.array([image])
-    let pixels = array.reshape([imageHeight, imageWidth])
-    if !FileManager.default.fileExists(atPath: outputFolder) {
-        try! FileManager.default.createDirectory(atPath: outputFolder,
-                                                 withIntermediateDirectories: false,
-                                                 attributes: nil)
-    }
-    ax.imshow(pixels, cmap: "gray")
-    plt.savefig("\(outputFolder)\(name).png", dpi: 300)
-    plt.close()
-}
+let imageHeight = 28
+let imageWidth = 28
 
-/// Reads a file into an array of bytes.
-func readFile(_ filename: String) -> [UInt8] {
-    let possibleFolders = [".", "Resources", "Autoencoder/Resources"]
-    for folder in possibleFolders {
-        let parent = URL(fileURLWithPath: folder)
-        let filePath = parent.appendingPathComponent(filename).path
-        guard FileManager.default.fileExists(atPath: filePath) else {
-            continue
-        }
-        let d = Python.open(filePath, "rb").read()
-        return Array(numpy: np.frombuffer(d, dtype: np.uint8))!
-    }
-    print("Failed to find file with name \(filename) in the following folders: \(possibleFolders).")
-    exit(-1)
-}
-
-/// Reads MNIST images and labels from specified file paths.
-func readMNIST(imagesFile: String, labelsFile: String) -> (images: Tensor<Float>,
-                                                           labels: Tensor<Int32>) {
-    print("Reading data.")
-    let images = readFile(imagesFile).dropFirst(16).map { Float($0) }
-    let labels = readFile(labelsFile).dropFirst(8).map { Int32($0) }
-    let rowCount = labels.count
-
-    print("Constructing data tensors.")
-    return (
-        images: Tensor(shape: [rowCount, imageHeight * imageWidth], scalars: images) / 255.0,
-        labels: Tensor(labels)
-    )
-}
-
-/// An autoencoder.
-struct Autoencoder: Layer {
-    typealias Input = Tensor<Float>
-    typealias Output = Tensor<Float>
-
-    var encoder1 = Dense<Float>(inputSize: imageHeight * imageWidth, outputSize: 128,
-                                activation: relu)
-    var encoder2 = Dense<Float>(inputSize: 128, outputSize: 64, activation: relu)
-    var encoder3 = Dense<Float>(inputSize: 64, outputSize: 12, activation: relu)
-    var encoder4 = Dense<Float>(inputSize: 12, outputSize: 3, activation: relu)
-
-    var decoder1 = Dense<Float>(inputSize: 3, outputSize: 12, activation: relu)
-    var decoder2 = Dense<Float>(inputSize: 12, outputSize: 64, activation: relu)
-    var decoder3 = Dense<Float>(inputSize: 64, outputSize: 128, activation: relu)
-    var decoder4 = Dense<Float>(inputSize: 128, outputSize: imageHeight * imageWidth,
-                                activation: tanh)
-
-    @differentiable
-    func call(_ input: Input) -> Output {
-        let encoder = input.sequenced(through: encoder1, encoder2, encoder3, encoder4)
-        return encoder.sequenced(through: decoder1, decoder2, decoder3, decoder4)
-    }
-}
-
-// MNIST data logic
-func minibatch<Scalar>(in x: Tensor<Scalar>, at index: Int) -> Tensor<Scalar> {
-    let start = index * batchSize
-    return x[start..<start+batchSize]
+let outputFolder = "./output/"
+let dataset = MNIST(batchSize: batchSize, flattening: true)
+// An autoencoder.
+var autoencoder = Sequential {
+    // The encoder.
+    Dense<Float>(inputSize: imageHeight * imageWidth, outputSize: 128, activation: relu)
+    Dense<Float>(inputSize: 128, outputSize: 64, activation: relu)
+    Dense<Float>(inputSize: 64, outputSize: 12, activation: relu)
+    Dense<Float>(inputSize: 12, outputSize: 3, activation: relu)
+    // The decoder.
+    Dense<Float>(inputSize: 3, outputSize: 12, activation: relu)
+    Dense<Float>(inputSize: 12, outputSize: 64, activation: relu)
+    Dense<Float>(inputSize: 64, outputSize: 128, activation: relu)
+    Dense<Float>(inputSize: 128, outputSize: imageHeight * imageWidth, activation: tanh)
 }
-
-let (images, numericLabels) = readMNIST(imagesFile: "train-images-idx3-ubyte",
-                                        labelsFile: "train-labels-idx1-ubyte")
-let labels = Tensor<Float>(oneHotAtIndices: numericLabels, depth: 10)
-
-var autoencoder = Autoencoder()
 let optimizer = RMSProp(for: autoencoder)
 
 // Training loop
 for epoch in 1...epochCount {
-    let sampleImage = Tensor(shape: [1, imageHeight * imageWidth], scalars: images[epoch].scalars)
+    let sampleImage = Tensor(
+        shape: [1, imageHeight * imageWidth], scalars: dataset.trainingImages[epoch].scalars)
     let testImage = autoencoder(sampleImage)
 
-    plot(image: sampleImage.scalars, name: "epoch-\(epoch)-input")
-    plot(image: testImage.scalars, name: "epoch-\(epoch)-output")
+    do {
+        try saveImage(
+            sampleImage, size: (imageWidth, imageHeight), directory: outputFolder,
+            name: "epoch-\(epoch)-input")
+        try saveImage(
+            testImage, size: (imageWidth, imageHeight), directory: outputFolder,
+            name: "epoch-\(epoch)-output")
+    } catch {
+        print("Could not save image with error: \(error)")
+    }
 
     let sampleLoss = meanSquaredError(predicted: testImage, expected: sampleImage)
     print("[Epoch: \(epoch)] Loss: \(sampleLoss)")
 
-    for i in 0 ..< Int(labels.shape[0]) / batchSize {
-        let x = minibatch(in: images, at: i)
+    for i in 0 ..< dataset.trainingSize / batchSize {
+        let x = dataset.trainingImages.minibatch(at: i, batchSize: batchSize)
 
         let 𝛁model = autoencoder.gradient { autoencoder -> Tensor<Float> in
             let image = autoencoder(x)
             return meanSquaredError(predicted: image, expected: x)
         }
 
-        optimizer.update(&autoencoder.allDifferentiableVariables, along: 𝛁model)
+        optimizer.update(&autoencoder, along: 𝛁model)
     }
 }
```
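The training loop above slices the dataset into batches by index, where batch `i` covers elements `i * batchSize ..< (i + 1) * batchSize`. A plain-Swift sketch of that index arithmetic, without the TensorFlow dependency (`minibatchRange` is a hypothetical helper, not the repository's `minibatch(at:batchSize:)` tensor method):

```swift
// Hypothetical sketch of the minibatch index arithmetic used in the training loop.
// The real code slices a Tensor; here we just compute the half-open element range.
func minibatchRange(at index: Int, batchSize: Int) -> Range<Int> {
    let start = index * batchSize
    return start..<(start + batchSize)
}

// With batchSize = 100, batch 3 covers elements 300..<400.
print(minibatchRange(at: 3, batchSize: 100))
```

Note that `dataset.trainingSize / batchSize` truncates, so any trailing partial batch is simply dropped, as in the original loop.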

CIFAR/Data.swift

Lines changed: 0 additions & 82 deletions
This file was deleted.

CIFAR/Helpers.swift

Lines changed: 0 additions & 51 deletions
This file was deleted.

CIFAR/README.md

Lines changed: 0 additions & 23 deletions
This file was deleted.
