Java client for RedisAI

To use an official release, add the Maven dependency:
<dependencies>
<dependency>
<groupId>com.redislabs</groupId>
<artifactId>jredisai</artifactId>
<version>0.9.0</version>
</dependency>
</dependencies>
<repositories>
<repository>
<id>snapshots-repo</id>
<url>https://oss.sonatype.org/content/repositories/snapshots</url>
</repository>
</repositories>
and the snapshot version of the dependency:
<dependencies>
<dependency>
<groupId>com.redislabs</groupId>
<artifactId>jredisai</artifactId>
<version>1.0.0-SNAPSHOT</version>
</dependency>
</dependencies>
import com.redislabs.redisai.Backend;
import com.redislabs.redisai.Device;
import com.redislabs.redisai.RedisAI;

// connect, load a TensorFlow graph, set the two input tensors, and run the model
RedisAI client = new RedisAI("localhost", 6379);
client.setModel("model", Backend.TF, Device.CPU, new String[] {"a", "b"}, new String[] {"mul"}, "graph.pb");
client.setTensor("a", new float[] {2, 3}, new int[]{2});
client.setTensor("b", new float[] {2, 3}, new int[]{2});
client.runModel("model", new String[] {"a", "b"}, new String[] {"c"});
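The output can then be read back. A minimal sketch, assuming a getTensor accessor returning a com.redislabs.redisai.Tensor with a getValues() getter (verify the exact types against the client's javadoc):

// hypothetical read-back of the output tensor "c" produced by runModel above
Tensor result = client.getTensor("c");
Object values = result.getValues(); // raw tensor values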
Chunk size: Since version 0.10.0, the model blob is sent in chunks of at most 512 MB (536870912 bytes), in line with the default Redis configuration. This can be changed through the redisai.blob.chunkSize system property, set at the beginning of the application. For example, the chunk size can be limited to 8 MB by passing -Dredisai.blob.chunkSize=8388608 on the command line or by calling System.setProperty(Model.BLOB_CHUNK_SIZE_PROPERTY, "8388608");. A value of 0 (zero) disables chunking.
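For instance, a minimal sketch of setting the property programmatically before the client is used (the 8 MB value is only an example):

// lower the model blob chunk size to 8 MB; must run before any model is sent
System.setProperty(Model.BLOB_CHUNK_SIZE_PROPERTY, "8388608");
RedisAI client = new RedisAI("localhost", 6379);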
Socket timeout: Operations with large data and/or long processing times may require a higher socket timeout. The following constructor may come in handy for that purpose.
// largeTimeout is the desired socket timeout in milliseconds
HostAndPort hostAndPort = new HostAndPort(host, port);
JedisClientConfig clientConfig = DefaultJedisClientConfig.builder().socketTimeoutMillis(largeTimeout).build();
RedisAI client = new RedisAI(hostAndPort, clientConfig);
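HostAndPort, JedisClientConfig, and DefaultJedisClientConfig come from the underlying Jedis library (the redis.clients.jedis package) that JRedisAI builds on.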