Tags
List of the tags that can be inserted in the text
Voice Tags
\pau=number\ | This tag inserts a pause of the specified number of milliseconds in the speech. |
\prx="phonetic"\ | With this tag, a user can synthesize a specific pronunciation inside a text. The phonetic string is composed of phonemes separated by space characters (the phonetic alphabet is language-dependent). This tag is only suitable for inserting single words into the text; unpredictable errors (mainly in prosody) can occur when inserting larger units. Example: I will say: \prx="h e l @U1"\. is equivalent to "I will say: hello." |
\rms=number\ | Sets the reading mode to spelling out each letter of each word (number is 1), or turns it off (number is 0). |
\rmw=number\ | Sets the reading mode to leaving audible pauses between each word (number is 1), or turns it off (number is 0). |
\spd=number\ | This tag sets the baseline average talking speed of the voice to the specified number of words per minute. Each voice has a default speed (about 180 words per minute, depending on the voice). Call \rst\ to reset to the default speed. Range: 1/3 to 3 times the default speed (typically \spd=60\ to \spd=540\). This tag persists for the following texts until the next voice change or the next tag cancelling it out. |
\rspd=number\ | This tag sets the relative speed. 100 is the default speed (about 180 words per minute, depending on the voice). Call \rst\ to reset to the default speed. This tag persists for the following texts until the next voice change or the next tag cancelling it out. |
\vct=number\ | This tag controls the shaping of the voice; please refer to the Developer’s Guide for more information (range: 50% to 150%). This tag persists for the following texts until the next voice change or the next tag cancelling it out. |
\vol=number\ | This tag sets the output volume. It is formatted as \vol=number\ where number is a value in the range 0 to 65535, inclusive. The default value is 65535. This tag persists for the following texts until the next voice change or the next tag cancelling it out. |
\audioboost=val\ | The Audio Boost affects 2 aspects of the speech: it improves speech clarity by emphasizing medium and high frequencies, which are important for intelligibility, and it increases the perceived level of the speech with no saturation effect. This tag is formatted as \audioboost=val\ where val is a value in the range 0 to 90, inclusive. The default value is 0. The parameter val controls the emphasis of medium and high frequencies, from no emphasis (0) to maximum emphasis (90). |
\rst\ | This tag resets the engine to the default settings for the current mode. |
\vce=key=value\ | Changes the speaking voice according to the specified characteristics. The pitch, speed, volume, etc. revert to the defaults for the new voice. \vce=speaker=Ryan\ specifies the speaker value of the voice. Beware that the speaker name is different from the voice name: for the voice Ryan22k_HQ, the speaker name is Ryan. Example: \vce=speaker=Ryan\ |
\sel=altN\ | Alternative synthesis for the following word. \sel=altN\ gives the N-th acoustic alternative for the following word (and thus, potentially, the whole breath group). Example: The sentence "I don't like the sound of this \sel=alt3\ word" takes the third best alternative pronunciation of the word 'word'. |
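Several voice tags can be combined in the same text. A minimal illustration (the speaker name and the values are placeholders, not recommendations):
\vce=speaker=Ryan\ \rspd=120\ Hello! \pau=500\ This part is spoken a little faster than the default. \rst\ And now every setting is back to its default value.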
Audio Tags
First you need to upload audio files to your account, using the account webpage or the api/storage endpoint.
Only raw PCM 16-bit mono files with the same frequency as the voice used are supported by the audio tags (file extension must be .raw).
You can also import an mp3 or wav file, which will be automatically converted to the right format/extension.
\audio=play="filename.raw"\ | Plays a sound in the foreground (synchronous mode). Example: \audio=play="filename.raw"\ I play a sound then I speak |
\audio=mix="filename.raw"\ | Plays the file in the background, the speech synthesis will continue during the playing. Example: \audio=mix="filename.raw"\ I play the sound while I speak |
\audio=offset=x\ | Skips x milliseconds at the beginning of the sound. Example: \audio=play="filename.raw";offset=5000\ I play the sound from position 5000ms then I speak |
\audio=pause\ \audio=resume\ \audio=stop\ \audio=play\ (mix mode only) | Pause / resume / stop / play the background sound. Example: \audio=mix="filename.raw"\ I play the music while I speak, then I put the background music on pause! \audio=pause\ Then I resume it! \audio=resume\ Finally, I stop it! \audio=stop\ Finished. |
continue | Makes a sound continue in the background (asynchronous mode). There must be a duration=timeduration or until=timeposition argument. It turns the foreground playing into background playing. Example: Please applaud! \audio=play="bravo.raw";duration=100;continue\ Thank you! This plays the sound for 100 milliseconds, then continues playing it in the background while saying "Thank you!". |
duration=timeduration | Plays the sound for timeduration milliseconds (play or mix commands) and then stops reading it. Example: \audio=play="mozart.raw";duration=2000\ Play the song for two seconds then speak. |
until=timeposition | Plays the sound until the position timeposition milliseconds within the sound is reached (play or mix commands) and then stops reading it. Example: \audio=play="mozart.raw";until=5000\ Play music for 5 seconds then speak. \audio=play="mozart.raw";offset=2000;until=5000\ Play the music again from 2000 ms to 5000 ms (three seconds). |
\audio=repeat=status\ | When status is on, continuously repeats the foreground or background sound (play or mix command). When it is off, does not repeat it (anymore). If status is a positive integral number, possibly zero, it is the number of additional times to play the sound from the beginning as soon as its end is reached, in addition to the regular play; in the end, the sound is played status + 1 times. Example: \audio=mix="bravo.raw";repeat=on\ I speak while they applaud! \audio=play;repeat=off\ |
\audio=volume=percentage\ | Sets the volume of the sound to percentage % of its base level. 100 is the base level; less than 100 reduces the volume, greater than 100 raises it (which can lead to distortion and saturation). Example: \audio=play="bravo.raw";volume=100;\ I play the sound file at 100% volume. To fade the volume in or out, add a pause tag after each volume tag. Example: \audio=volume=80\ \pau=1000\ \audio=volume=90\ \pau=1000\ \audio=volume=100\ \pau=1000\ \audio=volume=90\ \pau=1000\ \audio=volume=80\ \pau=1000\ \audio=volume=70\ \pau=1000\ \audio=volume=60\ \pau=1000\ |
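As a sketch of the complete workflow with the API (the file names, voice and token variable are placeholders, and it assumes the uploaded wav keeps its base name once converted to .raw), you can upload a sound file to your storage and then reference it with an audio tag in a TTS command:
curl --silent -X POST 'https://www.acapela-cloud.com/api/storage/' -F 'file=@./jingle.wav' -H 'Authorization: Token '$token -H 'Content-Type: multipart/form-data' -H 'Content-Disposition: attachment; filename=jingle.wav'
curl --silent -G 'https://www.acapela-cloud.com/api/command/' --data-urlencode 'voice=Alice22k_HQ' --data-urlencode 'text=\audio=mix="jingle.raw"\ I speak while the jingle plays in the background.' -H 'Authorization: Token '$token --output speech.mp3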
Api
Parameters
Field | Type | Description |
email | String | your email |
password | String | your password |
Responses
Value | Description | Response |
200 | Login success | {token : string} The token has no expiration date (it is deleted if you call the logout function) |
401 | Login failure | {'error':'User account not active.'} or {'error':'Unable to login with provided credentials.'} |
400 | Error | {'error':'Error description'} |
Samples
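A minimal curl sketch of the login request; the /api/login/ path is an assumption (it is not stated on this page) and the credentials are placeholders:
curl --silent -X POST 'https://www.acapela-cloud.com/api/login/' -d 'email=you@example.com' -d 'password=yourpassword'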
Authorization header
Field | Type | Description |
token | String | Token returned by the login function 'Authorization': 'Token ' + token |
Parameters
Field | Type | Description |
none |
Responses
Value | Description | Response |
200 | Logout success | {'success':'User logged out.'} |
401 | Invalid token | {'detail':'Invalid token.'} |
400 | Error | {'error':'Error description'} |
Samples
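A minimal sketch; the /api/logout/ path is an assumption, only the Authorization header format above is documented:
curl --silent -X POST 'https://www.acapela-cloud.com/api/logout/' -H 'Authorization: Token '$token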
Authorization header
Field | Type | Description |
token | String | Token returned by the login function 'Authorization': 'Token ' + token |
Parameters
Field | Type | Description |
None |
Responses
Value | Description | Response |
200 | Account information: email / address / voices (name, gender, language, locale) / credit ... | Json string |
400 | Error | {'error':'Error description'} |
Samples
Authorization header
Field | Type | Description |
token | String | Token returned by the login function 'Authorization': 'Token ' + token |
Parameters
Field | Type | Description |
first_name | String | Your first name |
last_name | String | Your last name |
address | String | Your address |
company | String | Your company |
country | String | Your country (must match one of the values returned by /api/country) |
Responses
Value | Description | Response |
200 | Account information (email/address/voices/credit ...) | Json string |
400 | Error | {'error':'Error description'} |
Samples
Authorization header
Field | Type | Description |
token | String | Token returned by the login function 'Authorization': 'Token ' + token |
Parameters
Field | Type | Value - Description |
voice | String | Alice22k_HQ - voice name (voice list returned in your account info) |
text | String | Text to speak. Warning : with the GET method the text is limited to 2048 characters |
output | String | stream (default) - returns the audio samples by chunks (starts speaking faster)
file - returns the audio file content when everything is generated, or a zip file if events are set to on: audio + events (words + visemes)
events - returns a json file with only the events (words / visemes) |
type | String | Audio type
Values : "mp3" (default) / "ogg" / "wav" / "flac" / "ac3" / "asf" / "wma" / "opus" / "aac" "aiff" / "webm" / "mka" / "s16le" / "alaw" / "mulaw" |
wordpos | String | Get word positions (json file)
Values : "on" / "off" (default) |
mouthpos | String | Get mouth positions (json file)
Values : "on" / "off" (default) |
speed | Int | Voice speed
Values : 30 to 300 (default 100) |
shaping | Int | Voice Shaping
Values : 50 to 150 (default 100) |
volume | Int | Voice volume
Values : 50 to 65535 (default 32768) |
samplerate | Int | Audio sample rate in Hz
Values : 8000 / 11025 / 12000 / 16000 / 22050 / 24000 / 32000 / 44100 / 48000 (default 22050) Note : Opus codec only supports 48000 |
bitrate | Int | Audio bitrate in kbps (mp3/ogg/ac3/asf/wma/opus)
Possible values : 24 / 32 / 40 / 48 / 56 / 64 / 80 / 96 / 112 / 128 / 144 / 160 / 192 / 224 / 240 / 256 / 320 Default and supported values depend on the codec and the samplerate |
dico | String | TTS Dictionary
Value : Dictionary name. File extension must be .dic. Multiple dictionaries must be separated by a comma (e.g. dico=dico1.dic,dico2.dic). If several dictionaries are loaded and contain the same word/entry, the last loaded dictionary has priority |
application | String | Application name/id
Value : Application name. Use this if you run multiple applications on the same account and need to have statistics for each application |
Responses
Value | Description | Response |
200 | TTS generation success | Audio chunks (stream) / Audio file content (storage with no position file) / Zip file content (storage with position file) |
403 | Not logged | {'error':'You must be logged.'} |
400 | Not enough credit | {'error':'Not enough credit.'} |
400 | Incorrect or not allowed voice | {'error':'Voice not allowed.'} |
400 | Incorrect parameter (value) | {'error':'Invalid parameter or parameter value.'} |
401 | Invalid token | {'detail':'Invalid token.'} |
400 | Error | {'error':'Error description'} |
Samples
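For example, a basic TTS request saving the generated mp3 to a local file (the voice and text are illustrative; the /api/command/ path is the one used elsewhere in this documentation):
curl --silent -G 'https://www.acapela-cloud.com/api/command/' --data-urlencode 'voice=Alice22k_HQ' --data-urlencode 'text=Hello world' --data-urlencode 'type=mp3' -H 'Authorization: Token '$token --output hello.mp3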
Authorization header
Field | Type | Description |
token | String | Token returned by the login function 'Authorization': 'Token ' + token |
Parameters
Field | Type | Value - Description |
type | String | voice : voice usage statistics / command : command usage statistics / credit : credit usage statistics / billing : billing based on credit usage and price per hour / purchase : online payments history (standalone account) |
interval (optional) | String | day : statistics cumulated day by day / month : statistics cumulated month by month / year : statistics cumulated year by year |
option (optional) | String | application : statistics by application name |
Responses
Value | Description | Response |
200 | Statistics information | Json string |
400 | Not logged | {'error':'You must be logged.'} |
401 | Invalid token | {'detail':'Invalid token.'} |
400 | Field type missing | {'error':'Type field missing.'} |
400 | Invalid type | {'error':'Invalid type.'} |
400 | Invalid interval | {'error':'Invalid interval.'} |
401 | Login failure | {'error':'User account not active.'} or {'error':'Unable to stats with provided credentials.'} |
400 | Error | {'error':'Error description'} |
Samples
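For example, credit statistics cumulated month by month (the type and interval values come from the tables above):
curl --silent -G 'https://www.acapela-cloud.com/api/stats/?type=credit&interval=month' -H 'Authorization: Token '$token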
Authorization header
Field | Type | Description |
token | String | Token returned by the login function 'Authorization': 'Token ' + token |
Parameters
Field | Type | Description |
password | String | your new password |
Responses
Value | Description | Response |
200 | Password changed | {'success':'Password changed.'} |
401 | Invalid token | {'detail':'Invalid token.'} |
400 | Error | {'error':'Error description'} |
Samples
Parameters
Field | Type | Description |
email | String | your account email |
Responses
Value | Description | Response |
200 | Password reset email sent | {'success':'Reset password email sent. Please check your inbox.'} |
400 | Invalid email | {'error':'Invalid email.'} |
400 | Error | {'error':'Error description'} |
Samples
For dictionary files, the extension must be .dic
For audio files, only raw PCM 16-bit mono files with the same frequency as the voice used are supported by the audio tags (file extension must be .raw)
You can also import an mp3 or wav file, which will be automatically converted to the right format.
Authorization header
Field | Type | Description |
token | String | Token returned by the login function 'Authorization': 'Token ' + token |
Header
Field | Type | Description |
Content-type | String | multipart/form-data |
Content-Disposition | String | 'attachment; filename=xxxxxx.xxx' |
Parameters
Field | Type | Description |
file | String | File path to upload |
Responses
Value | Description | Response |
200 | File uploaded | {"success":"File uploaded."} |
400 | Error | {'error':'Error description'} |
Samples
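For example, uploading an mp3 sound file, following the same pattern as the dictionary upload shown later in this documentation (the file name is a placeholder):
curl --silent -X POST 'https://www.acapela-cloud.com/api/storage/' -F 'file=@./sound.mp3' -H 'Authorization: Token '$token -H 'Content-Type: multipart/form-data' -H 'Content-Disposition: attachment; filename=sound.mp3'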
Authorization header
Field | Type | Description |
token | String | Token returned by the login function 'Authorization': 'Token ' + token |
Parameters
Field | Type | Description |
None |
Responses
Value | Description | Response |
200 | List of files stored in the account folder | Json string |
400 | Error | {'error':'Error description'} |
Samples
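A sketch of the listing request; using a GET on /api/storage/ to list the files is an assumption based on the upload endpoint:
curl --silent -G 'https://www.acapela-cloud.com/api/storage/' -H 'Authorization: Token '$token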
Authorization header
Field | Type | Description |
token | String | Token returned by the login function 'Authorization': 'Token ' + token |
Header
Field | Type | Description |
Content-type | String | application/json |
Parameters
Field | Type | Description |
file | String | Filename to delete |
Responses
Value | Description | Response |
200 | File deleted | {'success':'File deleted.'} |
400 | Error | {'error':'Error description'} |
Samples
GET Get the Terms of Service content
Parameters
Field | Type | Value - Description |
none |
Responses
Value | Description | Response |
200 | Terms of Service text | {'raw': '...', 'html': '...'} |
400 | Error | {'error':'Error description'} |
POST Accept or reject the Terms of Service
Authorization header
Field | Type | Description |
token | String | Token returned by the login function 'Authorization': 'Token ' + token |
Parameters
Field | Type | Value - Description |
answer | String | yes / no |
Responses
Value | Description | Response |
200 | Terms of Service accepted | {'success':'Terms of usage accepted. You can now use the service'} |
200 | Terms of Service rejected | {'success':'You didn't accept the Terms of Usage. You cannot use the service.'} |
403 | Not logged | {'error':'You must be logged.'} |
401 | Invalid token | {'detail':'Invalid token.'} |
400 | Error | {'error':'Error description'} |
Samples
Events
Word and viseme positions
If you need to highlight the text while speaking or display a speaking avatar, you can retrieve the word and viseme positions for a given text.
To do that, set the wordpos and/or mouthpos parameters to on in the /api/command/ request.
You'll get a json file with all the TTS events, including word and/or viseme positions.
Audio file and events json file (in a zip file)
/api/command/?voice=Sharon22k_HQ&text=Hello&output=file&wordpos=on&mouthpos=on&token=xxxxxxxxxx
Only the events in a json file
/api/command/?voice=Sharon22k_HQ&text=Hello&output=events&wordpos=on&mouthpos=on&token=xxxxxxxxxx
{ "EventKind": "Word", "ID": 5, "SamplePosition": 10006, "Time": { "val": 0.453786 }, "Flags": "TTS_Word", "WordOffset": 6, "WordSize": 3, "Word": "how" }, { "EventKind": "Phoneme", "ID": 5, "SamplePosition": 10006, "Time": { "val": 0.453786 }, "Flags": "TTS_Word", "Duration": 59, "Mouth": { "bMouthHeight": 48, "bMouthWidth": 64, "bMouthUpturn": 128, "bJawOpen": 16, "bTeethUpperVisible": 16, "bTeethLowerVisible": 16, "bTonguePosn": 48, "bLipTension": 0 }, "Viseme": 12 },
"EventKind": "Word"
SamplePosition Word position in samples "Time" / "val" Word position in seconds WordOffset Word position in bytes WordSize Word size in bytes"EventKind": "Phoneme"
Viseme : The viseme code based on the disney list SVP_0 = 0 'silence SVP_1 = 1 'ae ax ah SVP_2 = 2 'aa SVP_3 = 3 'ao SVP_4 = 4 'ey eh uh SVP_5 = 5 'er SVP_6 = 6 'y iy ih ix SVP_7 = 7 'w uw SVP_8 = 8 'ow SVP_9 = 9 'aw SVP_10 = 10 'oy SVP_11 = 11 'ay SVP_12 = 12 'h SVP_13 = 13 'r SVP_14 = 14 'l SVP_15 = 15 's z SVP_16 = 16 'sh ch jh zh SVP_17 = 17 'th dh SVP_18 = 18 'f v SVP_19 = 19 'd t n SVP_20 = 20 'k g ng SVP_21 = 21 'p b m SamplePosition Word position in samples "Time" / "val" Word position in secondsExample in python
import json   # standard library
import vlc    # python-vlc package

# Get word positions
words = []
positions = []
offsets = []
sizes = []

json_file = open("events.json", "r")
contents = json_file.read()
json_file.close()

try:
    json_content = json.loads(contents)
    for key, value in json_content.items():
        if key == "Event":
            for item in value:
                if item.get("EventKind") == "Word":
                    print(" - word : " + str(item['Word']))
                    print("\t TimeVal " + str(item["Time"]["val"]))
                    print("\t SamplePosition " + str(item["SamplePosition"]))
                    print("\t WordOffset " + str(item["WordOffset"]))
                    print("\t WordSize " + str(item["WordSize"]))
                    # keep the word events so they can be replayed with the audio below
                    words.append(item["Word"])
                    positions.append(item["Time"]["val"])
                    offsets.append(item["WordOffset"])
                    sizes.append(item["WordSize"])
except ValueError as error:
    print("Invalid events file: " + str(error))

# Play audio and display words events
player = vlc.MediaPlayer("audio.mp3")
player.play()
current_state = player.get_state()
word_index = 0
while current_state != vlc.State.Ended:
    current_audio_pos = player.get_time()  # current playing position in milliseconds
    if word_index < len(words):
        word_pos_ms = positions[word_index] * 1000  # word position in milliseconds
        # display the word when the audio position is within 100 ms of the word position
        if current_audio_pos != 0 and current_audio_pos - 100 <= word_pos_ms <= current_audio_pos + 100:
            print("\t - " + words[word_index] + ' - current_audio_pos : ' + str(current_audio_pos) + " - positions[word_index] : " + str(word_pos_ms))
            word_index = word_index + 1
    current_state = player.get_state()
Guides
Start using Acapela Cloud Editor
When you first start to use the editor, you only have one default project.
A project allows you to keep a specific set of text files and to select the voice and specific settings to use.
You can either use the default project as it is, rename it, or create other projects.
To start using the editor:
- either type a text in the editor box or click on the 'import file(s)' button to import text or excel sheet files (see the import chapter for the format).
- select the voice to use
- select the output type and samplerate
- listen by clicking on the play button (this won't use credits)
- tune your text if needed using tags (see interface chapter)
- save your modifications
- optionally generate the audio file
When you are satisfied with how a text sounds and you saved it, you can flag it as validated by clicking on the status column in the text list.
Finally you can generate a single file by right clicking on it in the text list or generate several audio files in a row by clicking on the 'Generate audio files' button.
You will be prompted to select the files to generate
Wait until the process is complete
When done a pop-up will appear to download a zipped file with all the audio files
How to import text, excel, sound and dictionary files
Input files
To import texts in the editor, click on the 'import file(s)' button.
You can select multiple text files at once, or an excel file that will be parsed to generate one text file per line.
If you select an excel file, it will create a text file from each line of the excel file.
The first column of the line will be used as the filename (the ".txt" extension will be added automatically).
The second column of the line will be used as the content of the file.
You can have multiple worksheet tabs.
Warning : any text file with the same name already present will be overwritten without prompting for a confirmation.
When done the text files will appear in the text list.
Sound files
To import audio/sound files to be used with audio tags click on the sounds tab on the right then on the 'import' button.
Only raw PCM 16-bit mono files with the same frequency as the voice used are supported by the audio tags (file extension must be .raw)
You can also import a mp3 or wav file which will be automatically converted to the right format.
Once imported you can use the sound file in your audio tags by clicking on audio mix or play.
Dictionary files
To import dictionary files:
- go to your account page, storage, and click on the import button. Once uploaded, the dictionaries will appear and can be used. The dictionary file extension must be .dic
- go to the dictionary menu, create your dictionary and export it
If several dictionaries are loaded and contain the same word/entry, the last loaded dictionary has priority
Once imported you can use the dictionary file by selecting it in the combo list.
Description of the interface, menus, settings, text tags ...
Add a text tag
Tags are a form of special markup, which can be inserted in the text in order to:
- improve pronunciation (e.g. pauses, phonetic transcriptions, reading mode and alternative selection tags)
- enhance audio (e.g. special sounds or background music)
- change the configuration of the speech engine during the synthesis (e.g. voice, speech rate, volume).
These tags can either be directly typed into the text editor (see tags doc) or be inserted into text from the tag list.
The tag list is divided into four tabs:
- settings : tags modifying the voice settings.
- pronunciation : tags modifying the pronunciation
- sound : play or mix audio sounds using the sound files you imported
- user : personalised tags created by the user
To insert a tag from the list, simply place the cursor in the text where you want to insert the tag and click on the tag in the tag list on the right.
Depending on the tag, either it will be inserted directly in the text or a pop-up window will appear, letting you choose the setting value.
Validate and the tag will be inserted in the text.
For some tags you can also select a part of the text, and a closing tag will be added that resets the setting to its default value.
User tag
User tags are very useful when you need a succession of tags you often use.
To create user tags, select a portion of text in the Editor you want to save as a user tag, then right-click.
In the contextual menu, choose the 'save as user tag' item and follow the steps.
The user tag will appear in the 'User' tab in the tag list on the right.
Simply click on it to insert it again in a text.
Alternative inflection (\sel tag)
Sometimes you encounter words that do not sound the way you expect.
For example, a part of the word may sound over-accentuated, or it can have strange intonation or even some sounds might be missing.
In such cases, you may find a better solution for your utterance with the \sel tag.
You can access it in three steps:
- select the word(s) (no more than 10 words) you want to change in the Editor
- right-click in Editor
- click the 'Alternative inflection' menu item
The window lets you select between 10 alternative inflections of each word in the required utterance.
Click on an utterance to listen to the new alternative inflection, which you can compare to the original rendition.
When you're happy with the new selection, click "ok" and tags \sel=altN\ will be inserted in text in the corresponding places.
If you select an alternative for a single word instead of the whole utterance, it can influence the pronunciation of the previous and following words in that utterance.
If you add \sel tags to a part of the sentence with the help of this editing window, and then edit another part of the sentence, the first edits may sometimes be affected.
To avoid this you can select subparts of the sentence delimited by punctuation signs or pauses.
Add word(s) into dictionary
If you feel a word is not well pronounced, you can directly import it into a dictionary.
Simply select it in the editor text box and right click.
Select 'Add to dictionary' option
A dictionary window will open, asking you to select the dictionary into which to import the word.
Project settings
Some specific project settings can be set.
They will be applied to the entire project, and therefore to all the project texts.
Click on the 'settings' button below the project list.
Guides
Start using Acapela Cloud Dictionary
First of all you need to create a dictionary.
Then you can either:
- import a text file which contains a list of words / expressions / abbreviations (one per line)
- import an existing Acapela-compatible dictionary in text format
- enter a new word manually.
The default nature/transcription will appear
For each entry in the entry list you can see its status / transcription quality
When you click on an entry, you can listen to how it sounds, then update/modify the pronunciation using phonemes or an orthographic transcription.
When you're happy with your transcription, you can click on the save button to add the word.
The entry will appear on the left. You can edit it again by clicking on it.
Finally, when you have entered all the entries you want, you can generate the dictionary file to be used by Acapela Cloud.
Simply click on the Generate button and the dictionary will be created and added to your account.
You can then use it on the Editor page or with the API
You need to generate the dictionary again every time you add, delete or update a word
If several dictionaries are loaded and contain the same word/entry, the last loaded dictionary has priority
Share Acapela Cloud Dictionary
If you have the dispatcher option and need to share your dictionary with your subscribers you need to go to the dispatcher menu.
You can then select the dictionary to share and give access to the subscriber(s) you want.
It will appear in the dictionary list for your subscriber(s)
When a dictionary is shared, you may want to prevent a validated entry from being modified.
For that you simply need to click on the lock/unlock icon
Once locked an entry can be modified/unlocked/deleted only by the user that locked it.
Each entry can be flagged as validated by clicking on the tick mark
Language Manuals
Import your dictionaries and use them with the API
1 - Upload your dictionaries in your account storage space
Using the API (api/storage)
curl --silent -X POST 'https://www.acapela-cloud.com/api/storage/' -F 'file=@./dico.dic' -H 'Authorization: Token '$token -H 'Content-Type: multipart/form-data' -H 'Content-Disposition: attachment; filename=dico.dic'
On your account page (storage section)
2 - Use it in your TTS commands
On the demo page, select it in the dictionary dropdown menu, then send a command.
curl --silent -G 'https://www.acapela-cloud.com/api/command/?voice=Alice22k_HQ&text=hello&output=storage&type=mp3&dico=dico.dic&token='$token
Migration guide from Vaas to Acapela Cloud API
If you were previously using Vaas (Voice as a Service) in your application, here are the instructions to help you migrate to Acapela Cloud
acapela-cloud-vaas-migration.pdf
Set a specific application ID to your commands
If you use multiple applications with the same account and need to see statistics for each application later,
you need to specify an application name or id (string) when you send a TTS command, using the 'application=xxxxx' parameter
curl --silent -G 'https://www.acapela-cloud.com/api/command/?voice=Alice22k_HQ&text=hello&application=applicationname&token=yourtoken'
Then when you check your statistics, you need to pass the option=application parameter
to retrieve the statistics split by application name/id
Command stats with option=application
curl --silent -G 'https://www.acapela-cloud.com/api/stats/?type=credit&option=application&token=yourtoken'
command stats : {'day': 27, 'month': 3, 'year': 2020, 'application': 'application-2', 'count': 50}
command stats : {'day': 27, 'month': 3, 'year': 2020, 'application': 'application-1', 'count': 30}
command stats : {'day': 30, 'month': 3, 'year': 2020, 'application': 'application-2', 'count': 25}
command stats : {'day': 30, 'month': 3, 'year': 2020, 'application': 'application-1', 'count': 35}
command stats : {'day': 31, 'month': 3, 'year': 2020, 'application': 'application-1', 'count': 40}
command stats : {'day': 31, 'month': 3, 'year': 2020, 'application': 'application-2', 'count': 30}
Command stats with no options
curl --silent -G 'https://www.acapela-cloud.com/api/stats/?type=credit&token=yourtoken'
Command stats global
command stats : {'day': 27, 'month': 3, 'year': 2020, 'count': 80}
command stats : {'day': 30, 'month': 3, 'year': 2020, 'count': 60}
command stats : {'day': 31, 'month': 3, 'year': 2020, 'count': 70}