This repository was archived by the owner on Apr 23, 2025. It is now read-only.

Tensorflow 0.5 #210

Merged (32 commits) on Sep 24, 2019
Changes from all commits
Commits
67f7c64
Use PascalCase for all Swift source files (#152)
kirillbobyrev May 5, 2019
35c20e8
Load config variable from `hparams.json` file so that Transformer can…
leoxzhao May 9, 2019
ad63e2b
Style fixes from #154 (#155)
saeta May 9, 2019
030153c
A few quick fixes to help unbreak swift-models. (#160)
saeta May 20, 2019
f9687bf
Unbreak models by inserting `.call`. (#161)
saeta May 20, 2019
4275c36
s/CotangentVector/TangentVector/g (#162)
brettkoonce May 24, 2019
9de3862
Add MNIST test set evaluation and change hyperparameters. (#163)
kamalkraj May 28, 2019
ed143c0
Updated models to use revised name for callable (SE-0253), fixed a fe…
leoxzhao Jun 8, 2019
cf21585
transformer: upstream api changes (#171)
brettkoonce Jun 19, 2019
edd734e
rebuild resnet block based approach (#170)
brettkoonce Jun 20, 2019
b9920ae
add gym blackjack qlearning demo (#173)
brettkoonce Jun 24, 2019
e3b8a6b
Make 'GoModel' stored properties be variables. (#174)
rxwei Jun 26, 2019
2fa11ba
Added .swift-format and updated .gitignore for Xcode 11's SwiftPM sup…
BradLarson Jul 19, 2019
08c80a5
Replaced Python with Swift in CIFAR10 dataset loading (#178)
BradLarson Jul 23, 2019
d4ccedc
Replace Autoencoder's tanh activation with sigmoid (#180)
t-ae Jul 24, 2019
d6f4496
Add GAN Example (#181)
t-ae Jul 26, 2019
7fb9185
nightlies URL: s4tf-kokoro-artifact-testing => swift-tensorflow-artif…
pschuh Jul 29, 2019
c79f62e
First steps in repository reorganization: extracting common MNIST dat…
BradLarson Jul 30, 2019
bf8f2fa
Update Dockerfile to install Python libraries (#184)
t-ae Jul 31, 2019
d34fc3c
Delete helpers.swift (#187)
mikowals Aug 2, 2019
b353333
Continuing repository reorganization: extracting common CIFAR-10 and …
BradLarson Aug 2, 2019
4d75df9
[Models] Fix enumeration of blocks in `ResidualBasicBlockStack` `init…
jon-tow Aug 7, 2019
66d442d
SqueezeNet Implementation (#189)
Andr0id100 Aug 9, 2019
e319a07
WideResNet - fix widenFactor and match model to citation (#193)
mikowals Aug 13, 2019
36fdbb1
Removal of deprecated allDifferentiableVariables. (#194)
BradLarson Aug 13, 2019
3fa05f7
Add JPEG loading / saving via Raw TF operations, removing Matplotlib …
BradLarson Aug 21, 2019
036f014
Convert MNIST classifier to use sequential (#200)
Shashi456 Aug 27, 2019
21c694d
Convert Autoencoder and Catch to use Sequential (#203)
Shashi456 Aug 28, 2019
523933f
Adding shape-based inference tests for all image classification model…
BradLarson Aug 29, 2019
c8e6347
Update deprecated APIs in Catch (#205)
Shashi456 Aug 29, 2019
9150822
Update squeezenet and add V1.1 version (#204)
Shashi456 Aug 30, 2019
adffc52
Fix '@noDerivative' warnings. (#208) (#209)
rxwei Sep 20, 2019
3 changes: 3 additions & 0 deletions .gitignore
@@ -5,4 +5,7 @@
 *.xcodeproj
 *.png
 .DS_Store
+.swiftpm
 cifar-10-batches-py/
+cifar-10-batches-bin/
+output/
14 changes: 14 additions & 0 deletions .swift-format
@@ -0,0 +1,14 @@
+{
+    "version": 1,
+    "lineLength": 100,
+    "indentation": {
+        "spaces": 4
+    },
+    "maximumBlankLines": 1,
+    "respectsExistingLineBreaks": true,
+    "blankLineBetweenMembers": {
+        "ignoreSingleLineProperties": true
+    },
+    "lineBreakBeforeControlFlowKeywords": false,
+    "lineBreakBeforeEachArgument": false
+}
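As an aside (not part of the diff), here is a hypothetical Swift snippet formatted the way this configuration asks: 4-space indentation, lines kept within 100 columns, at most one blank line in a row, and no line break before control-flow keywords such as `else`.

```swift
// Hypothetical example, written only to illustrate the .swift-format settings above.
struct GreetingFormatter {
    var name: String
    var excited: Bool

    func greeting() -> String {
        if excited {
            return "Hello, \(name)!"
        } else {
            return "Hello, \(name)."
        }
    }
}
```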
12 changes: 3 additions & 9 deletions Autoencoder/README.md
@@ -1,5 +1,7 @@
 # Simple Autoencoder
 
+This is an example of a simple 1-dimensional autoencoder model, using MNIST as a training dataset. It should produce output similar to the following:
+
 ### Epoch 1
 <p align="center">
 <img src="images/epoch-1-input.png" height="270" width="360">
@@ -12,7 +14,6 @@
 <img src="images/epoch-10-output.png" height="270" width="360">
 </p>
 
-This directory builds a simple 1-dimensional autoencoder model.
 
 ## Setup
 
@@ -23,12 +24,5 @@ installed. Make sure you've added the correct version of `swift` to your path.
 To train the model, run:
 
 ```
-swift run Autoencoder
-```
-If you using brew to install python2 and modules, change the path:
-- remove brew path '/usr/local/bin'
-- add TensorFlow swift Toolchain /Library/Developer/Toolchains/swift-latest/usr/bin
-
+swift run -c release Autoencoder
 ```
-export PATH=/Library/Developer/Toolchains/swift-latest/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:"${PATH}"
-```
131 changes: 33 additions & 98 deletions Autoencoder/main.swift
@@ -12,126 +12,61 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
+import Datasets
 import Foundation
+import ModelSupport
 import TensorFlow
-import Python
 
-// Import Python modules
-let matplotlib = Python.import("matplotlib")
-let np = Python.import("numpy")
-let plt = Python.import("matplotlib.pyplot")
-
-// Turn off using display on server / linux
-matplotlib.use("Agg")
-
 // Some globals
 let epochCount = 10
 let batchSize = 100
-let outputFolder = "./output/"
-let imageHeight = 28, imageWidth = 28
-
-func plot(image: [Float], name: String) {
-    // Create figure
-    let ax = plt.gca()
-    let array = np.array([image])
-    let pixels = array.reshape([imageHeight, imageWidth])
-    if !FileManager.default.fileExists(atPath: outputFolder) {
-        try! FileManager.default.createDirectory(atPath: outputFolder,
-                                                 withIntermediateDirectories: false,
-                                                 attributes: nil)
-    }
-    ax.imshow(pixels, cmap: "gray")
-    plt.savefig("\(outputFolder)\(name).png", dpi: 300)
-    plt.close()
-}
+let imageHeight = 28
+let imageWidth = 28
 
-/// Reads a file into an array of bytes.
-func readFile(_ filename: String) -> [UInt8] {
-    let possibleFolders = [".", "Resources", "Autoencoder/Resources"]
-    for folder in possibleFolders {
-        let parent = URL(fileURLWithPath: folder)
-        let filePath = parent.appendingPathComponent(filename).path
-        guard FileManager.default.fileExists(atPath: filePath) else {
-            continue
-        }
-        let d = Python.open(filePath, "rb").read()
-        return Array(numpy: np.frombuffer(d, dtype: np.uint8))!
-    }
-    print("Failed to find file with name \(filename) in the following folders: \(possibleFolders).")
-    exit(-1)
-}
-
-/// Reads MNIST images and labels from specified file paths.
-func readMNIST(imagesFile: String, labelsFile: String) -> (images: Tensor<Float>,
-                                                           labels: Tensor<Int32>) {
-    print("Reading data.")
-    let images = readFile(imagesFile).dropFirst(16).map { Float($0) }
-    let labels = readFile(labelsFile).dropFirst(8).map { Int32($0) }
-    let rowCount = labels.count
-
-    print("Constructing data tensors.")
-    return (
-        images: Tensor(shape: [rowCount, imageHeight * imageWidth], scalars: images) / 255.0,
-        labels: Tensor(labels)
-    )
-}
-
-/// An autoencoder.
-struct Autoencoder: Layer {
-    typealias Input = Tensor<Float>
-    typealias Output = Tensor<Float>
-
-    var encoder1 = Dense<Float>(inputSize: imageHeight * imageWidth, outputSize: 128,
-                                activation: relu)
-    var encoder2 = Dense<Float>(inputSize: 128, outputSize: 64, activation: relu)
-    var encoder3 = Dense<Float>(inputSize: 64, outputSize: 12, activation: relu)
-    var encoder4 = Dense<Float>(inputSize: 12, outputSize: 3, activation: relu)
-
-    var decoder1 = Dense<Float>(inputSize: 3, outputSize: 12, activation: relu)
-    var decoder2 = Dense<Float>(inputSize: 12, outputSize: 64, activation: relu)
-    var decoder3 = Dense<Float>(inputSize: 64, outputSize: 128, activation: relu)
-    var decoder4 = Dense<Float>(inputSize: 128, outputSize: imageHeight * imageWidth,
-                                activation: tanh)
-
-    @differentiable
-    func call(_ input: Input) -> Output {
-        let encoder = input.sequenced(through: encoder1, encoder2, encoder3, encoder4)
-        return encoder.sequenced(through: decoder1, decoder2, decoder3, decoder4)
-    }
-}
-
-// MNIST data logic
-func minibatch<Scalar>(in x: Tensor<Scalar>, at index: Int) -> Tensor<Scalar> {
-    let start = index * batchSize
-    return x[start..<start+batchSize]
+let outputFolder = "./output/"
+let dataset = MNIST(batchSize: batchSize, flattening: true)
+// An autoencoder.
+var autoencoder = Sequential {
+    // The encoder.
+    Dense<Float>(inputSize: imageHeight * imageWidth, outputSize: 128, activation: relu)
+    Dense<Float>(inputSize: 128, outputSize: 64, activation: relu)
+    Dense<Float>(inputSize: 64, outputSize: 12, activation: relu)
+    Dense<Float>(inputSize: 12, outputSize: 3, activation: relu)
+    // The decoder.
+    Dense<Float>(inputSize: 3, outputSize: 12, activation: relu)
+    Dense<Float>(inputSize: 12, outputSize: 64, activation: relu)
+    Dense<Float>(inputSize: 64, outputSize: 128, activation: relu)
+    Dense<Float>(inputSize: 128, outputSize: imageHeight * imageWidth, activation: tanh)
 }
 
-let (images, numericLabels) = readMNIST(imagesFile: "train-images-idx3-ubyte",
-                                        labelsFile: "train-labels-idx1-ubyte")
-let labels = Tensor<Float>(oneHotAtIndices: numericLabels, depth: 10)
-
-var autoencoder = Autoencoder()
 let optimizer = RMSProp(for: autoencoder)
 
 // Training loop
 for epoch in 1...epochCount {
-    let sampleImage = Tensor(shape: [1, imageHeight * imageWidth], scalars: images[epoch].scalars)
+    let sampleImage = Tensor(
+        shape: [1, imageHeight * imageWidth], scalars: dataset.trainingImages[epoch].scalars)
     let testImage = autoencoder(sampleImage)
 
-    plot(image: sampleImage.scalars, name: "epoch-\(epoch)-input")
-    plot(image: testImage.scalars, name: "epoch-\(epoch)-output")
+    do {
+        try saveImage(
+            sampleImage, size: (imageWidth, imageHeight), directory: outputFolder,
+            name: "epoch-\(epoch)-input")
+        try saveImage(
+            testImage, size: (imageWidth, imageHeight), directory: outputFolder,
+            name: "epoch-\(epoch)-output")
+    } catch {
+        print("Could not save image with error: \(error)")
+    }
 
     let sampleLoss = meanSquaredError(predicted: testImage, expected: sampleImage)
     print("[Epoch: \(epoch)] Loss: \(sampleLoss)")
 
-    for i in 0 ..< Int(labels.shape[0]) / batchSize {
-        let x = minibatch(in: images, at: i)
+    for i in 0 ..< dataset.trainingSize / batchSize {
+        let x = dataset.trainingImages.minibatch(at: i, batchSize: batchSize)
 
         let 𝛁model = autoencoder.gradient { autoencoder -> Tensor<Float> in
             let image = autoencoder(x)
             return meanSquaredError(predicted: image, expected: x)
         }
 
-        optimizer.update(&autoencoder.allDifferentiableVariables, along: 𝛁model)
+        optimizer.update(&autoencoder, along: 𝛁model)
     }
 }
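For readers skimming the diff above, here is a minimal, self-contained sketch of the training pattern it adopts: a `Sequential` model, a gradient taken with the same closure style, and `optimizer.update(_:along:)` applied to the model value directly (as required after the removal of `allDifferentiableVariables` in #194). The layer sizes and the constant stand-in batch below are illustrative placeholders, not values from the PR.

```swift
import TensorFlow

// A tiny two-layer "autoencoder" shaped like the diff's Sequential model.
let flattenedSize = 28 * 28
var model = Sequential {
    Dense<Float>(inputSize: flattenedSize, outputSize: 32, activation: relu)
    Dense<Float>(inputSize: 32, outputSize: flattenedSize, activation: tanh)
}
let optimizer = RMSProp(for: model)

// A constant batch standing in for flattened MNIST images.
let batch = Tensor<Float>(repeating: 0.5, shape: [8, flattenedSize])

// One reconstruction step: differentiate the loss with respect to the model...
let 𝛁model = model.gradient { model -> Tensor<Float> in
    let reconstruction = model(batch)
    return meanSquaredError(predicted: reconstruction, expected: batch)
}
// ...and update the model value in place.
optimizer.update(&model, along: 𝛁model)
```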
82 changes: 0 additions & 82 deletions CIFAR/Data.swift

This file was deleted.

51 changes: 0 additions & 51 deletions CIFAR/Helpers.swift

This file was deleted.

23 changes: 0 additions & 23 deletions CIFAR/README.md

This file was deleted.
