mirror of https://github.com/ml-explore/mlx.git
synced 2024-09-15 10:04:00 +02:00

Readme (#2)

* readme wip
* more readme
* examples
* spell
* comments + nits

This commit is contained in:
parent adb992a780
commit d1926c4752
@@ -11,7 +11,7 @@ possible.

   and after the change. Examples of benchmarks can be found in `benchmarks/python/`.

4. If you've changed APIs, update the documentation.

5. Every PR should have passing tests and at least one review.

6. For code formatting install `pre-commit` using something like `pip install pre-commit` and run `pre-commit install`.
   This should install hooks for running `black` and `clang-format` to ensure
   consistent style for C++ and Python code.
README.md

@@ -1,39 +1,77 @@
# MLX

[**Quickstart**](#quickstart) | [**Installation**](#installation) |
[**Documentation**](https://ml-explore.github.io/mlx/build/html/index.html) |
[**Examples**](#examples)

MLX is an array framework for machine learning on Apple silicon.

Some key features of MLX include:

- **Familiar APIs**: MLX has a Python API which closely follows NumPy.
  MLX also has a fully featured C++ API which closely mirrors the Python API.
  MLX has higher level packages like `mlx.nn` and `mlx.optimizers` with APIs
  that closely follow PyTorch to simplify building more complex models.

- **Composable function transformations**: MLX has composable function
  transformations for automatic differentiation, automatic vectorization,
  and computation graph optimization.

- **Lazy computation**: Computations in MLX are lazy. Arrays are only
  materialized when needed.

- **Dynamic graph construction**: Computation graphs in MLX are built
  dynamically. Changing the shapes of function arguments does not trigger
  slow compilations, and debugging is simple and intuitive.

- **Multi-device**: Operations can run on any of the supported devices
  (currently the CPU and GPU).

- **Unified memory**: A notable difference between MLX and other frameworks
  is the *unified memory model*. Arrays in MLX live in shared memory.
  Operations on MLX arrays can be performed on any of the supported
  device types without moving data.

MLX is designed by machine learning researchers for machine learning
researchers. The framework is intended to be user friendly, but still
efficient for training and deploying models. The design of the framework
itself is also conceptually simple. We intend to make it easy for researchers
to extend and improve MLX with the goal of quickly exploring new ideas.

The design of MLX is inspired by frameworks like
[NumPy](https://numpy.org/doc/stable/index.html),
[PyTorch](https://pytorch.org/), [Jax](https://github.com/google/jax), and
[ArrayFire](https://arrayfire.org/).
## Examples

The [MLX examples repo](https://github.com/ml-explore/mlx-examples) has a
variety of examples including:

- [Transformer language model](https://github.com/ml-explore/mlx-examples/tree/main/transformer_lm) training.
- Large scale text generation with
  [LLaMA](https://github.com/ml-explore/mlx-examples/tree/main/llama) and
  finetuning with [LoRA](https://github.com/ml-explore/mlx-examples/tree/main/lora).
- Generating images with [Stable Diffusion](https://github.com/ml-explore/mlx-examples/tree/main/stable_diffusion).
- Speech recognition with [OpenAI's Whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper).

## Quickstart

See the [quick start
guide](https://pages.github.pie.apple.com/ml-explore/framework002/build/html/quick_start.html)
in the documentation.

## Installation

MLX is available on [PyPi](https://pypi.org/project/mlx/). To install the Python API run:

```
pip install mlx
```

Check out the
[documentation](https://ml-explore.github.io/mlx/build/html/install.html#)
for more information on building the C++ and Python APIs from source.

## Contributing
@@ -9,7 +9,7 @@ MLX with your own Apple silicon computer is

.. code-block:: shell

   pip install mlx

Build from source
-----------------
@@ -46,6 +46,17 @@ Then simply build and install it using pip:

   env CMAKE_BUILD_PARALLEL_LEVEL="" pip install .

For developing use an editable install:

.. code-block:: shell

   env CMAKE_BUILD_PARALLEL_LEVEL="" pip install -e .

To make sure the install is working run the tests with:

.. code-block:: shell

   python -m unittest discover python/tests

C++ API
^^^^^^^
@@ -13,7 +13,7 @@ The main differences between MLX and NumPy are:

  and computation graph optimization.
- **Lazy computation**: Computations in MLX are lazy. Arrays are only
  materialized when needed.
- **Multi-device**: Operations can run on any of the supported devices (CPU,
  GPU, ...)

The design of MLX is strongly inspired by frameworks like `PyTorch