
Fixed a bug in const folding, where an inst that's not of type SingleValueInstruction gets added to the worklist #18272


Merged
merged 1 commit into from
Jul 27, 2018

Conversation

mhong

@mhong mhong commented Jul 27, 2018

This caused a cast assertion failure at https://github.com/apple/swift/blame/master/lib/SILOptimizer/Utils/ConstantFolding.cpp#L1585.

One example inst is the following, which produces a SILValue of type
MultipleValueInstructionResult, so ValueBase::getDefiningInstruction() still
returns a valid inst for it, even though the graph_op inst is not a SingleValueInstruction:

```
%94 = graph_op "Fill,i,i"(%73 : $TensorHandle<Int32>, %85 : $TensorHandle<Float>) {T: $Float, index_type: $Int32, __device: "/device:CPU:0"} : $TensorHandle<Float> // users: %1581, %100, %98, %97, %99, %103, %106, %111, %113, %118, %755
```
@mhong
Author

mhong commented Jul 27, 2018

@swift-ci please test tensorflow

@mhong mhong requested review from devincoughlin, eeckstein and lattner and removed request for devincoughlin and eeckstein July 27, 2018 01:12
@mhong
Author

mhong commented Jul 27, 2018

I can upstream this fix later if that makes sense.

@mhong
Author

mhong commented Jul 27, 2018

I don't have a simpler test case, but the fix is motivated by a crash in the Autoencoder model, and I verified the fix with that model as follows.

```
/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/swift-linux-x86_64/bin/swiftc -Xllvm -tf-strict-deabstraction -O -L/usr/local/google/home/hongm/ssd_part/git/swift-base/build/bazel-bin/tensorflow -ltensorflow -ltensorflow_framework  -I/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/swift-linux-x86_64/lib/swift/linux  -L/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/swift-linux-x86_64/lib/swift/linux -O  -L/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/libdispatch-linux-x86_64/src/.libs -I/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/foundation-linux-x86_64/Foundation/usr//lib/swift -I/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/foundation-linux-x86_64/Foundation -L/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/foundation-linux-x86_64/Foundation -DDEPLOYMENT_RUNTIME_SWIFT -I/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/foundation-linux-x86_64/Foundation/usr/lib/swift  -I/usr/local/google/home/hongm/ssd_part/git/swift-base/swift-corelibs-libdispatch -I/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/libdispatch-linux-x86_64/src/swift -lFoundation -ldispatch /usr/local/google/home/hongm/ssd_part/git/swift-models/Autoencoder/Autoencoder.swift 
```

The compilation finishes. When I run the code as

```
LD_LIBRARY_PATH=/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/foundation-linux-x86_64/Foundation/:/usr/local/google/home/hongm/ssd_part/git/swift-base/build/Ninja-ReleaseAssert+stdlib-Release/libdispatch-linux-x86_64/src/.libs ./Autoencoder
```

I got:

```
Fatal error: 'try!' expression unexpectedly raised an error: exception: cannot import name _remove_dead_weakref: file /usr/local/google/home/hongm/ssd_part/git/swift-base/swift/stdlib/public/core/ErrorType.swift, line 185
```

This is probably due to an issue in my environment, and I suspect it would not occur if we built and ran this model with a proper S4TF toolchain.

@mhong
Author

mhong commented Jul 27, 2018

@rxwei and @marcrasi, FYI.

@mhong
Author

mhong commented Jul 27, 2018

I built a toolchain, and compiled the model code with:

```
$ /usr/local/google/home/hongm/ssd_part/git/swift-base/swift/swift-nightly-install/usr/bin/swiftc  -Xllvm -tf-strict-deabstraction -O /usr/local/google/home/hongm/ssd_part/git/swift-models/Autoencoder/Autoencoder.swift
```

But the above runtime error still occurs. @rxwei, are you seeing this as well?

@mhong
Author

mhong commented Jul 27, 2018

From a Google search, this error appears to come from Python (e.g. MDAnalysis/mdanalysis#1739). Any ideas?

@rxwei
Contributor

rxwei commented Jul 27, 2018

This looks like a Python error. On macOS, removing the Homebrew-installed Python solves the problem; I'm not sure about Linux, though.
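A hedged diagnostic sketch (not from this thread; it assumes a Unix shell with `python3` on PATH): `_remove_dead_weakref` landed in the C `_weakref` module around Python 2.7.14 / 3.5.4 / 3.6.2, so the import error usually means a newer stdlib `weakref.py` is being paired with an older interpreter, e.g. a conda or brewed Python shadowing the system one. Checking which interpreter wins on PATH, and whether its C module has the symbol, narrows that down:

```shell
# Which python wins on PATH? (a conda env prepends its own bin directory)
which -a python python3 || true
python3 --version
# If this import fails, the stdlib weakref.py and the C _weakref extension
# come from mismatched Python installations.
python3 -c "from _weakref import _remove_dead_weakref; print('ok')"
```

Running the same import check with whichever interpreter the failing binary embeds (here, presumably Python 2) points at the mismatched installation.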

Contributor

@lattner lattner left a comment


LGTM, please do upstream this, thanks!

@mhong
Author

mhong commented Jul 27, 2018

@dan-zheng Dan, could you try to reproduce this Python error on Linux? Thanks.

@mhong mhong merged commit 650ace4 into swiftlang:tensorflow Jul 27, 2018
@mhong mhong deleted the const_fold_no_multi_value_inst branch July 27, 2018 03:37
@dan-zheng
Contributor

@mhong I wasn't able to reproduce the error on a fresh toolchain build (swift 37864e1, swift-models tensorflow/swift-models@40c239b).

I ran the following and it worked fine:

```
swift/swift-nightly-install/usr/bin/swiftc -O -Xllvm -tf-strict-deabstraction ~/swift-models/Autoencoder/Autoencoder.swift
```

`exception: cannot import name _remove_dead_weakref` is a common bug according to Google. I'm not sure what's causing it for you, but it's usually Python-environment related.

@mhong
Author

mhong commented Jul 27, 2018

@dan-zheng, running swiftc also works for me -- the error occurs when running the compiled binary ./Autoencoder. Can you confirm that it works for you on Linux?

My error trace:

$ ./Autoencoder 
Reading the data.
Constructing the data tensors.
2018-07-26 22:36:50.024222: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-07-26 22:36:50.220735: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1404] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:02:00.0
totalMemory: 7.92GiB freeMemory: 7.80GiB
2018-07-26 22:36:50.315098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1404] Found device 1 with properties: 
name: Quadro K1200 major: 5 minor: 0 memoryClockRate(GHz): 1.0325
pciBusID: 0000:03:00.0
totalMemory: 3.91GiB freeMemory: 2.59GiB
2018-07-26 22:36:50.315164: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1468] Ignoring visible gpu device (device: 1, name: Quadro K1200, pci bus id: 0000:03:00.0, compute capability: 5.0) with Cuda multiprocessor count: 4. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
2018-07-26 22:36:50.315174: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1483] Adding visible gpu devices: 0
2018-07-26 22:36:50.563228: I tensorflow/core/common_runtime/gpu/gpu_device.cc:964] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-26 22:36:50.563258: I tensorflow/core/common_runtime/gpu/gpu_device.cc:970]      0 1 
2018-07-26 22:36:50.563264: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 0:   N N 
2018-07-26 22:36:50.563268: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 1:   N N 
2018-07-26 22:36:50.563471: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7528 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
2018-07-26 22:36:50.563662: E tensorflow/core/common_runtime/gpu/gpu_device.cc:228] Illegal GPUOptions.experimental.num_dev_to_dev_copy_streams=0 set to 1 instead.
2018-07-26 22:36:50.666787: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1468] Ignoring visible gpu device (device: 1, name: Quadro K1200, pci bus id: 0000:03:00.0, compute capability: 5.0) with Cuda multiprocessor count: 4. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
2018-07-26 22:36:50.666823: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1483] Adding visible gpu devices: 0
2018-07-26 22:36:50.666856: I tensorflow/core/common_runtime/gpu/gpu_device.cc:964] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-26 22:36:50.666863: I tensorflow/core/common_runtime/gpu/gpu_device.cc:970]      0 1 
2018-07-26 22:36:50.666870: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 0:   N N 
2018-07-26 22:36:50.666874: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 1:   N N 
2018-07-26 22:36:50.666991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7528 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
2018-07-26 22:36:50.798780: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1468] Ignoring visible gpu device (device: 1, name: Quadro K1200, pci bus id: 0000:03:00.0, compute capability: 5.0) with Cuda multiprocessor count: 4. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
2018-07-26 22:36:50.798813: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1483] Adding visible gpu devices: 0
2018-07-26 22:36:50.798843: I tensorflow/core/common_runtime/gpu/gpu_device.cc:964] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-26 22:36:50.798850: I tensorflow/core/common_runtime/gpu/gpu_device.cc:970]      0 1 
2018-07-26 22:36:50.798855: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 0:   N N 
2018-07-26 22:36:50.798860: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 1:   N N 
2018-07-26 22:36:50.798980: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7528 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
Embedding loss:  0.39247057
Fatal error: 'try!' expression unexpectedly raised an error: exception: No module named matplotlib: file /usr/local/google/home/hongm/ssd_part/git/swift-base/swift/stdlib/public/core/ErrorType.swift, line 185
Current stack trace:
0    libswiftCore.so                    0x00007f9b97f11740 _swift_stdlib_reportFatalErrorInFile + 215
1    libswiftCore.so                    0x00007f9b97e4e1f1 <unavailable> + 3596785
2    libswiftCore.so                    0x00007f9b97c5b695 <unavailable> + 1554069
3    libswiftCore.so                    0x00007f9b97e4e07d <unavailable> + 3596413
4    libswiftCore.so                    0x00007f9b97c5a7cd <unavailable> + 1550285
5    libswiftCore.so                    0x00007f9b97e00c7f <unavailable> + 3279999
6    libswiftCore.so                    0x00007f9b97e59553 <unavailable> + 3642707
Illegal instruction

@rxwei
Contributor

rxwei commented Jul 27, 2018

You need to install matplotlib.

```
pip2 install matplotlib
```

@dan-zheng
Contributor

dan-zheng commented Jul 27, 2018

The error is `exception: No module named matplotlib`.
Could you try installing matplotlib via `pip install matplotlib`? If you don't have pip, here are installation instructions.

Incidentally, I did find another bug in the data-loading logic (using CommandLine.arguments.first as the Swift script path is not robust). I pushed a fix at tensorflow/swift-models#20.

@mhong
Author

mhong commented Jul 27, 2018

The matplotlib error appears to be a red herring -- I got it only when a certain conda env was activated. Once I deactivate it, I'm back to the `_remove_dead_weakref` error:

$ pip install matplotlib
Requirement already satisfied: matplotlib in /usr/local/google/home/hongm/anaconda2/lib/python2.7/site-packages (2.1.2)
Requirement already satisfied: numpy>=1.7.1 in /usr/local/google/home/hongm/.local/lib/python2.7/site-packages (from matplotlib) (1.14.3)
Requirement already satisfied: six>=1.10 in /usr/local/google/home/hongm/.local/lib/python2.7/site-packages (from matplotlib) (1.11.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/google/home/hongm/.local/lib/python2.7/site-packages (from matplotlib) (2.7.2)
Requirement already satisfied: backports.functools_lru_cache in /usr/local/google/home/hongm/anaconda2/lib/python2.7/site-packages (from matplotlib) (1.4)
Requirement already satisfied: subprocess32 in /usr/local/google/home/hongm/anaconda2/lib/python2.7/site-packages (from matplotlib) (3.2.7)
Requirement already satisfied: pytz in /usr/local/google/home/hongm/anaconda2/lib/python2.7/site-packages (from matplotlib) (2017.3)
Requirement already satisfied: cycler>=0.10 in /usr/local/google/home/hongm/anaconda2/lib/python2.7/site-packages (from matplotlib) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/google/home/hongm/anaconda2/lib/python2.7/site-packages (from matplotlib) (2.2.0)
grin 1.2.1 requires argparse>=1.1, which is not installed.
tensorboard 1.7.0 has requirement bleach==1.5.0, but you'll have bleach 2.1.3 which is incompatible.
tensorboard 1.7.0 has requirement html5lib==0.9999999, but you'll have html5lib 1.0.1 which is incompatible.
tensorflow-tensorboard 1.5.1 has requirement bleach==1.5.0, but you'll have bleach 2.1.3 which is incompatible.
tensorflow-tensorboard 1.5.1 has requirement html5lib==0.9999999, but you'll have html5lib 1.0.1 which is incompatible.
You are using pip version 10.0.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
hongm@hongm:~/ssd_part/git/swift-models/Autoencoder$ ./Autoencoder 
Reading the data.
Constructing the data tensors.
2018-07-27 00:11:29.788851: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-07-27 00:11:30.038159: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1404] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:02:00.0
totalMemory: 7.92GiB freeMemory: 7.80GiB
2018-07-27 00:11:30.273406: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1404] Found device 1 with properties: 
name: Quadro K1200 major: 5 minor: 0 memoryClockRate(GHz): 1.0325
pciBusID: 0000:03:00.0
totalMemory: 3.91GiB freeMemory: 2.59GiB
2018-07-27 00:11:30.287197: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1468] Ignoring visible gpu device (device: 1, name: Quadro K1200, pci bus id: 0000:03:00.0, compute capability: 5.0) with Cuda multiprocessor count: 4. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
2018-07-27 00:11:30.287242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1483] Adding visible gpu devices: 0
2018-07-27 00:11:30.763266: I tensorflow/core/common_runtime/gpu/gpu_device.cc:964] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-27 00:11:30.763305: I tensorflow/core/common_runtime/gpu/gpu_device.cc:970]      0 1 
2018-07-27 00:11:30.763318: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 0:   N N 
2018-07-27 00:11:30.763325: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 1:   N N 
2018-07-27 00:11:30.763582: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7528 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
2018-07-27 00:11:30.763892: E tensorflow/core/common_runtime/gpu/gpu_device.cc:228] Illegal GPUOptions.experimental.num_dev_to_dev_copy_streams=0 set to 1 instead.
2018-07-27 00:11:30.895722: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1468] Ignoring visible gpu device (device: 1, name: Quadro K1200, pci bus id: 0000:03:00.0, compute capability: 5.0) with Cuda multiprocessor count: 4. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
2018-07-27 00:11:30.895775: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1483] Adding visible gpu devices: 0
2018-07-27 00:11:30.895877: I tensorflow/core/common_runtime/gpu/gpu_device.cc:964] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-27 00:11:30.895898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:970]      0 1 
2018-07-27 00:11:30.895913: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 0:   N N 
2018-07-27 00:11:30.895925: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 1:   N N 
2018-07-27 00:11:30.896093: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7528 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
2018-07-27 00:11:31.559420: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1468] Ignoring visible gpu device (device: 1, name: Quadro K1200, pci bus id: 0000:03:00.0, compute capability: 5.0) with Cuda multiprocessor count: 4. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
2018-07-27 00:11:31.559468: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1483] Adding visible gpu devices: 0
2018-07-27 00:11:31.559520: I tensorflow/core/common_runtime/gpu/gpu_device.cc:964] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-07-27 00:11:31.559538: I tensorflow/core/common_runtime/gpu/gpu_device.cc:970]      0 1 
2018-07-27 00:11:31.559551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 0:   N N 
2018-07-27 00:11:31.559563: I tensorflow/core/common_runtime/gpu/gpu_device.cc:983] 1:   N N 
2018-07-27 00:11:31.559731: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7528 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1)
Embedding loss:  0.3907438
Fatal error: 'try!' expression unexpectedly raised an error: exception: cannot import name _remove_dead_weakref: file /usr/local/google/home/hongm/ssd_part/git/swift-base/swift/stdlib/public/core/ErrorType.swift, line 185
Current stack trace:
0    libswiftCore.so                    0x00007ff8036e6740 _swift_stdlib_reportFatalErrorInFile + 215
1    libswiftCore.so                    0x00007ff8036231f1 <unavailable> + 3596785
2    libswiftCore.so                    0x00007ff803430695 <unavailable> + 1554069
3    libswiftCore.so                    0x00007ff80362307d <unavailable> + 3596413
4    libswiftCore.so                    0x00007ff80342f7cd <unavailable> + 1550285
5    libswiftCore.so                    0x00007ff8035d5c7f <unavailable> + 3279999
6    libswiftCore.so                    0x00007ff80362e553 <unavailable> + 3642707
Illegal instruction

mhong pushed a commit to mhong/swift that referenced this pull request Jul 27, 2018
SingleValueInstruction gets added to the worklist, causing cast assert at https://github.com/apple/swift/blame/master/lib/SILOptimizer/Utils/ConstantFolding.cpp#L1585.

One such example inst is the following (in the tensorflow branch), which produces a SILValue of type
MultipleValueInstructionResult, so ValueBase::getDefiningInstruction() still
returns a valid inst for it, even though that graph_op inst is not a SingleValueInstruction.

```
%94 = graph_op "Fill,i,i"(%73 : $TensorHandle<Int32>, %85 : $TensorHandle<Float>) {T: $Float, index_type: $Int32, __device: "/device:CPU:0"} : $TensorHandle<Float>
```

The same fix has been merged into the tensorflow branch: swiftlang#18272
mhong pushed a commit to mhong/swift that referenced this pull request Jul 31, 2018
MultipleValueInstructionResult, which got fixed in
swiftlang#18272.

Confirmed this test crashes without that fix.
mhong pushed a commit that referenced this pull request Aug 1, 2018
MultipleValueInstructionResult, which got fixed in
#18272.

Confirmed this test crashes without that fix.