A Flutter application demonstrating how to implement Google Maps and its advanced options in a Flutter app.

Overview

google_maps_flutter_example

A Flutter application that demonstrates how to integrate google_maps_flutter in a Flutter app and perform advanced tasks with it.

Adding Map To the App

  1. Get an API key at https://cloud.google.com/maps-platform/.

  2. Enable the Google Maps SDK for each platform.

    • Go to Google Developers Console.
    • Choose the project that you want to enable Google Maps on.
    • Select the navigation menu and then select "Google Maps".
    • Select "APIs" under the Google Maps menu.
    • To enable Google Maps for Android, select "Maps SDK for Android" in the "Additional APIs" section, then select "ENABLE".
    • To enable Google Maps for iOS, select "Maps SDK for iOS" in the "Additional APIs" section, then select "ENABLE".
    • Make sure the APIs you enabled are under the "Enabled APIs" section.
  3. In android/app/src/main/AndroidManifest.xml, add your API key inside the application tag:

">
<manifest ...
  android:name="com.google.android.geo.API_KEY"
               android:value="YOUR KEY HERE"/>
  4. In ios/Runner/AppDelegate.swift, add the following lines:
import UIKit
import Flutter
import GoogleMaps

@UIApplicationMain
@objc class AppDelegate: FlutterAppDelegate {
  override func application(
    _ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
  ) -> Bool {
    GMSServices.provideAPIKey("YOUR KEY HERE")
    GeneratedPluginRegistrant.register(with: self)
    return super.application(application, didFinishLaunchingWithOptions: launchOptions)
  }
}
  5. Use GoogleMapsWidget.dart inside the lib/widget folder as a normal widget wherever you need a map (a minimal usage sketch follows).
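A minimal usage sketch, assuming GoogleMapsWidget is the widget exported from lib/widget/GoogleMapsWidget.dart and using a hypothetical MapScreen host screen for illustration:

import 'package:flutter/material.dart';
import 'widget/GoogleMapsWidget.dart';

class MapScreen extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Google Maps Demo')),
      // The map widget is embedded like any other widget.
      body: GoogleMapsWidget(),
    );
  }
}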

Adding Custom Marker To the map

Adding normal marker

  1. Declare a Set of Markers that will be shown on the map
Set<Marker> _markers = Set<Marker>();
  2. Add the set of markers to the GoogleMap widget:
GoogleMap(
      markers: _markers,
  3. Update the set of markers after the map is created, in onMapCreated:
GoogleMap(
      onMapCreated: (GoogleMapController controller) {
              _controller.complete(controller);
              _setMapPins([LatLng(30.029585, 31.022356)]);
            }
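These snippets assume a Completer that exposes the GoogleMapController once the map is ready; a minimal declaration (a sketch, the project may name or place it differently) would be:

// Completer comes from dart:async.
final Completer<GoogleMapController> _controller =
    Completer<GoogleMapController>();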
  4. Use this function to update the map with the given markers:
_setMapPins(List<LatLng> markersLocation) {
    _markers.clear();
    setState(() {
      markersLocation.forEach((markerLocation) {
        _markers.add(Marker(
          markerId: MarkerId(markerLocation.toString()),
          position: markerLocation,
        ));
      });
    });
  }

Customizing the markers

  1. Declare a BitmapDescriptor which will hold the customIcon
late BitmapDescriptor customIcon;
  2. Inside initState(), assign the needed PNG to customIcon:
@override
  void initState() {
    BitmapDescriptor.fromAssetImage(ImageConfiguration(size: Size(50, 50)),
            'assets/images/marker_car.png')
        .then((icon) {
      customIcon = icon;
    });
    super.initState();
  }
  3. Finally, add customIcon to the marker:
Marker(
     markerId: MarkerId(markerLocation.toString()),
     position: markerLocation,
     icon: customIcon,
   )

Map Customization (Light/Dark mode)

Prepare the map styles

  1. Go to https://mapstyle.withgoogle.com/
  2. Switch to the old version of the site by choosing "No thanks, take me to the old style wizard".
  3. You will find a lot of options; play with them until you get the desired style.
  4. Click Finish and a pop-up will show the JSON code of your style. Copy it and add it as a JSON file in your assets folder. Don't forget to declare it in your pubspec.yaml. You can find two ready-made styles in the project's assets folder.

Adding styles to the map

  1. Declare Strings that will hold your styles' JSON and a bool to control which mode is shown on the map:
bool mapDarkMode = true;
late String _darkMapStyle;
late String _lightMapStyle;
  2. Load the style files from assets (call _loadMapStyles from initState, as sketched after the snippet):
Future _loadMapStyles() async {
    _darkMapStyle = await rootBundle.loadString('assets/map_style/dark.json');
    _lightMapStyle = await rootBundle.loadString('assets/map_style/light.json');
  }
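A sketch of wiring this into initState (the exact call site is an assumption; the project may load the styles elsewhere):

@override
void initState() {
  super.initState();
  _loadMapStyles(); // loads both JSON styles from the assets folder
}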
  3. After the map is created, set the style:
onMapCreated: (GoogleMapController controller) {
          _controller.complete(controller);
          _setMapPins([LatLng(30.029585, 31.022356)]);
          _setMapStyle();
        },
Future _setMapStyle() async {
    final controller = await _controller.future;
    if (mapDarkMode)
      controller.setMapStyle(_darkMapStyle);
    else
      controller.setMapStyle(_lightMapStyle);
  }
  4. To toggle the style, we add a button on top of the map using a Stack widget:
Positioned(
   top: 100,
   right: 30,
   child: Container(
     height: 30,
     width: 30,
     child: IconButton(
       icon: Icon(
         mapDarkMode ? Icons.brightness_4 : Icons.brightness_5,
         color: Theme.of(context).primaryColor,
       ),
       onPressed: () {
         setState(() {
           mapDarkMode = !mapDarkMode;
           _setMapStyle();
         });
       },
     ),
   )),

Drawing routes

Activating Directions API

  1. Go to Google Developers Console.
  2. Choose the project that you want to enable Google Maps on.
  3. Select the navigation menu and then select "Google Maps".
  4. Select "APIs" under the Google Maps menu.
  5. To enable the Directions API, select "Directions API" in the "Additional APIs" section, then select "ENABLE".
  6. Make sure the APIs you enabled are under the "Enabled APIs" section.

Adding route to the map

  1. Declare your start and end points
final LatLng initialLatLng = LatLng(30.029585, 31.022356);
final LatLng destinationLatLng = LatLng(30.060567, 30.962413);
  2. Declare the polyline set and polylineCoordinates:
Set<Polyline> _polyline = {};
List<LatLng> polylineCoordinates = [];
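The polyline set also has to be handed to the GoogleMap widget, mirroring how the markers were added (a partial snippet in the same style as above):

GoogleMap(
      markers: _markers,
      polylines: _polyline,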
  3. After the map is created, add the polyline:
onMapCreated: (GoogleMapController controller) {
          _controller.complete(controller);
          _setMapPins([LatLng(30.029585, 31.022356)]);
          _setMapStyle();
          _addPolyLines();
        },
_addPolyLines() {
    setState(() {
      lat = (initialLatLng.latitude + destinationLatLng.latitude)/2;
      lng= (initialLatLng.longitude + destinationLatLng.longitude)/2;
      _moveCamera(13.0);
      _setPolyLines();
    });
  }
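_addPolyLines relies on lat and lng fields and a _moveCamera helper that are not shown above; a minimal sketch of what they could look like (names taken from the snippet, implementation assumed):

double lat = 0;
double lng = 0;

Future<void> _moveCamera(double zoom) async {
  final controller = await _controller.future;
  // Center the camera between the start and destination points.
  controller.animateCamera(
    CameraUpdate.newLatLngZoom(LatLng(lat, lng), zoom),
  );
}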
  4. To build the polyline, we send a GET request to https://maps.googleapis.com/maps/api/directions/json with the start location, end location, and the API key (a sketch of such a request follows the snippet below):
final result = await MapRepository()
        .getRouteCoordinates(initialLatLng, destinationLatLng);
final route = result.data["routes"][0]["overview_polyline"]["points"];
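MapRepository ships with the project; the sketch below shows what getRouteCoordinates might look like using the dio package. The class shape, field names, and googleApiKey constant are assumptions, not the project's actual implementation.

import 'package:dio/dio.dart';
import 'package:google_maps_flutter/google_maps_flutter.dart';

class MapRepository {
  final Dio _dio = Dio();

  // Calls the Directions API and returns the raw response;
  // result.data["routes"][0]["overview_polyline"]["points"] holds the encoded route.
  Future<Response> getRouteCoordinates(LatLng start, LatLng end) {
    const googleApiKey = 'YOUR KEY HERE'; // key with the Directions API enabled
    return _dio.get(
      'https://maps.googleapis.com/maps/api/directions/json',
      queryParameters: {
        'origin': '${start.latitude},${start.longitude}',
        'destination': '${end.latitude},${end.longitude}',
        'key': googleApiKey,
      },
    );
  }
}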
  5. Then we translate the result into a polyline using MapUtils (a sketch of the decoder follows the snippet):
_polyline.add(Polyline(
    polylineId: PolylineId("tripRoute"),
    //pass any string here
    width: 3,
    geodesic: true,
    points: MapUtils.convertToLatLng(MapUtils.decodePoly(route)),
    color: Theme.of(context).primaryColor));
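MapUtils is also part of the project; the decoder below is a sketch of the standard encoded-polyline algorithm using the method names from the snippet above. This exact implementation is an assumption.

import 'package:google_maps_flutter/google_maps_flutter.dart';

class MapUtils {
  // Decodes Google's encoded polyline format into a flat [lat, lng, lat, lng, ...] list.
  static List<double> decodePoly(String encoded) {
    final List<double> points = [];
    int index = 0, lat = 0, lng = 0;
    while (index < encoded.length) {
      int shift = 0, result = 0, b = 0;
      do {
        b = encoded.codeUnitAt(index++) - 63;
        result |= (b & 0x1f) << shift;
        shift += 5;
      } while (b >= 0x20);
      lat += (result & 1) != 0 ? ~(result >> 1) : (result >> 1);
      shift = 0;
      result = 0;
      do {
        b = encoded.codeUnitAt(index++) - 63;
        result |= (b & 0x1f) << shift;
        shift += 5;
      } while (b >= 0x20);
      lng += (result & 1) != 0 ? ~(result >> 1) : (result >> 1);
      points..add(lat / 1E5)..add(lng / 1E5);
    }
    return points;
  }

  // Pairs the flat list up into LatLng objects for the Polyline widget.
  static List<LatLng> convertToLatLng(List<double> points) {
    final List<LatLng> result = [];
    for (int i = 0; i < points.length; i += 2) {
      result.add(LatLng(points[i], points[i + 1]));
    }
    return result;
  }
}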
Comments
  • RunValueLogGC crashed


    What version of Go are you using (go version)?

    $ go version
    go version go1.13.4 linux/amd64
    

    What version of Badger are you using?

    v2.0.0

    Does this issue reproduce with the latest master?

    Never tried

    What are the hardware specifications of the machine (RAM, OS, Disk)?

    Linux 64 SSD

    What did you do?

    	opts := badger.DefaultOptions(dir)
    	opts.SyncWrites = sync
    	db, err := badger.Open(opts)
    	if err != nil {
    		return nil, err
    	}
    	db.RunValueLogGC(0.1)
    
    	go func() {
    		ticker := time.NewTicker(1 * time.Minute)
    		defer ticker.Stop()
    		for range ticker.C {
    			lsm, vlog := db.Size()
    			if lsm > 1024*1024*8 || vlog > 1024*1024*32 {
    				db.RunValueLogGC(0.5)
    			}
    		}
    	}()
    

    What did you expect to see?

    Run value log gc should work

    What did you see instead?

    mixin[28404]: github.com/dgraph-io/badger/v2/y.AssertTrue
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/y/error.go:55
    mixin[28404]: github.com/dgraph-io/badger/v2.(*valueLog).doRunGC.func2
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/value.go:1591
    mixin[28404]: github.com/dgraph-io/badger/v2.(*valueLog).iterate
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/value.go:480
    mixin[28404]: github.com/dgraph-io/badger/v2.(*valueLog).doRunGC
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/value.go:1557
    mixin[28404]: github.com/dgraph-io/badger/v2.(*valueLog).runGC
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/value.go:1685
    mixin[28404]: github.com/dgraph-io/badger/v2.(*DB).RunValueLogGC
    mixin[28404]:         /home/one/GOPATH/pkg/mod/github.com/dgraph-io/badger/[email protected]/db.go:1129
    mixin[28404]: github.com/MixinNetwork/mixin/storage.openDB.func1
    mixin[28404]:         /home/one/github/mixin/storage/badger.go:68
    mixin[28404]: runtime.goexit
    mixin[28404]:         /snap/go/4762/src/runtime/asm_amd64.s:1357
    

    badger.go:68 db.RunValueLogGC(0.5)

    kind/maintenance priority/P1 status/accepted area/crash 
    opened by cedricfung 46
  • ARMv7 segmentation fault in oracle.readTs when calling loadUint64


    I am facing an issue running badger on an ARMv7 architecture. The minimal test case below works quite fine on an amd64 machine but, unfortunately, not on ARMv7 32bit.

    The trace below shows that the issue originates in atomic.loadUint64() but I also run basic atomic operations tests against the golang runtime, and they work fine on this architecture.

    It looks to me that the underlying memory of oracle.curRead somehow vanishes but I am not sure.

    Below you find also a strace trace. There the segmentation fault happens after the madvise, but I am not sure if this is related.

    Badger version: 1.0.1 (89689ef36cae26ae094cb5ea79b7400d839f2d68)
    golang version: 1.8.5 and 1.9.2

    Test case:

    func TestPersistentCache_DirectBadger(t *testing.T) {
    	dir, err := ioutil.TempDir("", "")
    	if err != nil {
    		t.Fatal(err)
    	}
    	defer os.RemoveAll(dir)
    
    	config := badger.DefaultOptions
    	config.TableLoadingMode = options.MemoryMap
    	config.ValueLogFileSize = 16 << 20
    	config.LevelOneSize = 8 << 20
    	config.MaxTableSize = 2 << 20
    	config.Dir = dir
    	config.ValueDir = dir
    	config.SyncWrites = false
    
    	db, err := badger.Open(config)
    	if err != nil {
    		t.Fatalf("cannot open db at location %s: %v", dir, err)
    	}
    
    	err = db.View(func(txn *badger.Txn) error { return nil })
    
    	if err != nil {
    		t.Fatal(err)
    	}
    
    	if err = db.Close(); err != nil {
    		t.Fatal(err)
    	}
    }
    
    === RUN   TestPersistentCache_DirectBadger
    --- FAIL: TestPersistentCache_DirectBadger (0.01s)
    panic: runtime error: invalid memory address or nil pointer dereference [recovered]
            panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x4 pc=0x1150c]
    
    goroutine 5 [running]:
    testing.tRunner.func1(0x10a793b0)
            /usr/lib/go/src/testing/testing.go:711 +0x2a0
    panic(0x3e4bd8, 0x6bb478)
            /usr/lib/go/src/runtime/panic.go:491 +0x204
    sync/atomic.loadUint64(0x10a483cc, 0x200000, 0x0)
            /usr/lib/go/src/sync/atomic/64bit_arm.go:10 +0x3c
    github.com/grid-x/client/vendor/github.com/dgraph-io/badger.(*oracle).readTs(0x10a483c0, 0x14, 0x5)
            /home/robert/Projects/gridx/client/src/github.com/grid-x/client/vendor/github.com/dgraph-io/badger/transaction.go:87 +0x3c
    github.com/grid-x/client/vendor/github.com/dgraph-io/badger.(*DB).NewTransaction(0x10b06000, 0x0, 0x4cccc)
            /home/robert/Projects/gridx/client/src/github.com/grid-x/client/vendor/github.com/dgraph-io/badger/transaction.go:440 +0x20
    github.com/grid-x/client/vendor/github.com/dgraph-io/badger.(*DB).View(0x10b06000, 0x464e20, 0x0, 0x0)
            /home/robert/Projects/gridx/client/src/github.com/grid-x/client/vendor/github.com/dgraph-io/badger/transaction.go:457 +0x3c
    command-line-arguments.TestPersistentCache_DirectBadger(0x10a793b0)
            /home/robert/Projects/gridx/client/src/github.com/grid-x/client/pkg/cache/persistent_cache_test.go:64 +0x1e8
    testing.tRunner(0x10a793b0, 0x464e24)
            /usr/lib/go/src/testing/testing.go:746 +0xb0
    created by testing.(*T).Run
            /usr/lib/go/src/testing/testing.go:789 +0x258
    

    strace:

    [pid 15075] mmap2(NULL, 33554432, PROT_READ, MAP_SHARED, 6, 0) = 0xb4dff000                                   
    [pid 15075] madvise(0xb4dff000, 33554432, MADV_RANDOM) = 0                     
    [pid 15075] clock_gettime(CLOCK_MONOTONIC, {tv_sec=69709, tv_nsec=217038306}) = 0                            
    [pid 15075] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x4} ---
    [pid 15075] rt_sigreturn()              = 0                                       
    
    kind/bug 
    opened by gq0 35
  • Performance regression 1.6 to 2.0.2


    What version of Go are you using (go version)?

    go version go1.12.7 darwin/amd64

    What version of Badger are you using?

    2.0.2 (upgrading from 1.6.0)

    Does this issue reproduce with the latest master?

    Haven't tried.

    What are the hardware specifications of the machine (RAM, OS, Disk)?

    GCP 8 CPU (Intel Haswell), 32 GB RAM, 750 GB local ssd

    What did you do?

    Running code which extracts data from Kafka and saves to Badger DB. I'm running on exact same hardware, disk and my code against exact same Kafka topic.

    What did you expect to see?

    Better or equal performance with Badger 2.

    What did you see instead?

    Severe slowdown after writing ~1,461,000 records. See below

    1.6.0 performance:

    Performance in 1.6.0 takes about 300-400ms to extract 1000 messages.

      Up to offset 1453000 Time[330ms] Events[1453001] UrisCreated[1975] PathsCreated[0] Bytes[11.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T07:35:07.000] EstTimeToFinish[4h17m58s]
      Up to offset 1454000 Time[360ms] Events[1454001] UrisCreated[1954] PathsCreated[0] Bytes[11.2 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T11:31:43.000] EstTimeToFinish[4h18m1s]
      Up to offset 1455000 Time[340ms] Events[1455001] UrisCreated[1969] PathsCreated[0] Bytes[11.0 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T15:33:31.000] EstTimeToFinish[4h18m4s]
      Up to offset 1456000 Time[360ms] Events[1456001] UrisCreated[1789] PathsCreated[0] Bytes[13.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T20:46:14.000] EstTimeToFinish[4h18m7s]
      Up to offset 1457000 Time[320ms] Events[1457001] UrisCreated[1720] PathsCreated[0] Bytes[13.0 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T06:56:07.000] EstTimeToFinish[4h18m9s]
      Up to offset 1458000 Time[300ms] Events[1458001] UrisCreated[1736] PathsCreated[1] Bytes[10.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T18:40:17.000] EstTimeToFinish[4h18m9s]
    badger 2020/02/17 15:10:15 DEBUG: Flushing memtable, mt.size=194491818 size of flushChan: 0
    badger 2020/02/17 15:10:15 DEBUG: Storing value log head: {Fid:1 Len:45 Offset:87078740}
      Up to offset 1459000 Time[380ms] Events[1459001] UrisCreated[2140] PathsCreated[0] Bytes[11.4 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T21:04:18.000] EstTimeToFinish[4h18m13s]
      Up to offset 1460000 Time[370ms] Events[1460001] UrisCreated[1776] PathsCreated[0] Bytes[10.4 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T00:02:01.000] EstTimeToFinish[4h18m17s]
    badger 2020/02/17 15:10:16 DEBUG: Flushing memtable, mt.size=119942867 size of flushChan: 0
    badger 2020/02/17 15:10:16 DEBUG: Storing value log head: {Fid:1 Len:45 Offset:87168065}
      Up to offset 1461000 Time[430ms] Events[1461001] UrisCreated[1753] PathsCreated[0] Bytes[10.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T06:01:21.000] EstTimeToFinish[4h18m23s]
      Up to offset 1462000 Time[370ms] Events[1462001] UrisCreated[1779] PathsCreated[0] Bytes[10.5 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T16:45:03.000] EstTimeToFinish[4h18m26s]
      Up to offset 1463000 Time[360ms] Events[1463001] UrisCreated[1735] PathsCreated[0] Bytes[11.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T20:10:04.000] EstTimeToFinish[4h18m29s]
      Up to offset 1464000 Time[370ms] Events[1464001] UrisCreated[1664] PathsCreated[0] Bytes[10.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T23:03:44.000] EstTimeToFinish[4h18m33s]
      Up to offset 1465000 Time[350ms] Events[1465001] UrisCreated[1732] PathsCreated[0] Bytes[10.2 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T02:38:13.000] EstTimeToFinish[4h18m35s]
      Up to offset 1466000 Time[380ms] Events[1466001] UrisCreated[1825] PathsCreated[0] Bytes[10.6 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T06:12:39.000] EstTimeToFinish[4h18m39s]
      Up to offset 1467000 Time[360ms] Events[1467001] UrisCreated[1868] PathsCreated[0] Bytes[11.1 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T10:08:51.000] EstTimeToFinish[4h18m42s]
      Up to offset 1468000 Time[380ms] Events[1468001] UrisCreated[1920] PathsCreated[1] Bytes[11.3 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T13:54:45.000] EstTimeToFinish[4h18m46s]
      Up to offset 1469000 Time[350ms] Events[1469001] UrisCreated[1875] PathsCreated[0] Bytes[11.5 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T17:20:47.000] EstTimeToFinish[4h18m48s]
      Up to offset 1470000 Time[350ms] Events[1470001] UrisCreated[1767] PathsCreated[0] Bytes[11.3 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T20:41:05.000] EstTimeToFinish[4h18m51s]
      Up to offset 1471000 Time[340ms] Events[1471001] UrisCreated[1768] PathsCreated[0] Bytes[10.8 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T23:51:59.000] EstTimeToFinish[4h18m54s]
      Up to offset 1472000 Time[370ms] Events[1472001] UrisCreated[1758] PathsCreated[0] Bytes[10.8 MiB] TotalBytes[11.9 GiB] Date[2014-11-14T03:28:45.000] EstTimeToFinish[4h18m57s]
    
    

    2.0.2 performance:

    Notice that at approximately offset 1462000 (1,462,000 records), things start slowing down from a rate of 300-400ms per 1,000 records to 25-30 seconds per 1,000 records! It happens after the very first Flushing memtable debug message. If you look above, the Flushing happens at the exact same place, but things continue speedily after.

      Up to offset 1453000 Time[360ms] Events[1453001] UrisCreated[1975] PathsCreated[0] Bytes[11.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T07:35:07.000] EstTimeToFinish[4h19m33s]
      Up to offset 1454000 Time[330ms] Events[1454001] UrisCreated[1954] PathsCreated[0] Bytes[11.2 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T11:31:43.000] EstTimeToFinish[4h19m35s]
      Up to offset 1455000 Time[380ms] Events[1455001] UrisCreated[1969] PathsCreated[0] Bytes[11.0 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T15:33:31.000] EstTimeToFinish[4h19m39s]
      Up to offset 1456000 Time[320ms] Events[1456001] UrisCreated[1789] PathsCreated[0] Bytes[13.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-10T20:46:14.000] EstTimeToFinish[4h19m41s]
      Up to offset 1457000 Time[340ms] Events[1457001] UrisCreated[1720] PathsCreated[0] Bytes[13.0 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T06:56:07.000] EstTimeToFinish[4h19m43s]
      Up to offset 1458000 Time[310ms] Events[1458001] UrisCreated[1736] PathsCreated[1] Bytes[10.3 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T18:40:17.000] EstTimeToFinish[4h19m44s]
    badger 2020/03/09 17:36:39 DEBUG: Flushing memtable, mt.size=194487650 size of flushChan: 0
    badger 2020/03/09 17:36:39 DEBUG: Storing value log head: {Fid:1 Len:32 Offset:74078864}
      Up to offset 1459000 Time[680ms] Events[1459001] UrisCreated[2140] PathsCreated[0] Bytes[11.4 MiB] TotalBytes[11.7 GiB] Date[2014-11-11T21:04:18.000] EstTimeToFinish[4h20m0s]
      Up to offset 1460000 Time[500ms] Events[1460001] UrisCreated[1776] PathsCreated[0] Bytes[10.4 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T00:02:01.000] EstTimeToFinish[4h20m8s]
    badger 2020/03/09 17:36:40 DEBUG: Flushing memtable, mt.size=119942767 size of flushChan: 0
    badger 2020/03/09 17:36:40 DEBUG: Storing value log head: {Fid:1 Len:32 Offset:74168111}
      Up to offset 1461000 Time[430ms] Events[1461001] UrisCreated[1753] PathsCreated[0] Bytes[10.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T06:01:21.000] EstTimeToFinish[4h20m14s]
      Up to offset 1462000 Time[4.74s] Events[1462001] UrisCreated[1779] PathsCreated[0] Bytes[10.5 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T16:45:03.000] EstTimeToFinish[4h23m6s]
      Up to offset 1463000 Time[14.45s] Events[1463001] UrisCreated[1735] PathsCreated[0] Bytes[11.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T20:10:04.000] EstTimeToFinish[4h32m12s]
      Up to offset 1464000 Time[19.38s] Events[1464001] UrisCreated[1664] PathsCreated[0] Bytes[10.0 MiB] TotalBytes[11.8 GiB] Date[2014-11-12T23:03:44.000] EstTimeToFinish[4h44m27s]
      Up to offset 1465000 Time[24.52s] Events[1465001] UrisCreated[1732] PathsCreated[0] Bytes[10.2 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T02:38:13.000] EstTimeToFinish[4h59m59s]
      Up to offset 1466000 Time[27.25s] Events[1466001] UrisCreated[1825] PathsCreated[0] Bytes[10.6 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T06:12:39.000] EstTimeToFinish[5h17m15s]
      Up to offset 1467000 Time[31.8s] Events[1467001] UrisCreated[1868] PathsCreated[0] Bytes[11.1 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T10:08:51.000] EstTimeToFinish[5h37m24s]
      Up to offset 1468000 Time[32.87s] Events[1468001] UrisCreated[1920] PathsCreated[1] Bytes[11.3 MiB] TotalBytes[11.8 GiB] Date[2014-11-13T13:54:45.000] EstTimeToFinish[5h58m12s]
      Up to offset 1469000 Time[28.9s] Events[1469001] UrisCreated[1875] PathsCreated[0] Bytes[11.5 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T17:20:47.000] EstTimeToFinish[6h16m27s]
      Up to offset 1470000 Time[27.58s] Events[1470001] UrisCreated[1767] PathsCreated[0] Bytes[11.3 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T20:41:05.000] EstTimeToFinish[6h33m49s]
      Up to offset 1471000 Time[30.04s] Events[1471001] UrisCreated[1768] PathsCreated[0] Bytes[10.8 MiB] TotalBytes[11.9 GiB] Date[2014-11-13T23:51:59.000] EstTimeToFinish[6h52m44s]
      Up to offset 1472000 Time[34.09s] Events[1472001] UrisCreated[1758] PathsCreated[0] Bytes[10.8 MiB] TotalBytes[11.9 GiB] Date[2014-11-14T03:28:45.000] EstTimeToFinish[7h14m13s]
    

    I tried the same with compression disabled and saw similar results. The options I'm using are DefaultOptions with the following tweaks:

    	actualOpts := opts.
    		WithMaxTableSize(256 << 20). // max size 256M
    		WithSyncWrites(false).       // don't sync writes for faster performance
    		WithCompression(options.None)
    

    I literally just started on the 2.0 migration today. I'm running the same code I've been running for 6 months.

    kind/enhancement priority/P0 area/performance status/accepted 
    opened by dougdonohoe 30
  • Discard invalid versions of keys during compaction


    I'm hoping this is a configuration related issue but I've played around with the settings and I keep getting the same behavior. Tested on an i3.4XL in AWS, raid0 on the two SSD drives.

    Expected behavior of the code below:

    • keys/data are stored for 1hr, after a few hours the badger directory should stay fairly constant as you write/expire keys
    • I would expect to see sst files written and multiple size levels each level a larger file size
    • memory should stay fairly consistent

    Actual behavior seen:

    • OOM's after 12 hours
    • all sst files at 67MB (thousands of them)
    • disk fills up on a 4TB drive, no data appears to ttl out
    • file counts steadily increase until oom (there's no leveling off)
    • every hour the process stalls (assuming the stall count is being hit according to profiler)

    Please advise of what is wrong in the code below, thanks!

    3HRs of runtime you can see just linear growth https://imgur.com/a/2UUfIrG

    UPDATE: I've also tried with these settings; memory doesn't grow as fast, but it still climbs linearly until OOM, with the same behavior as above.

    dir := "/raid0/badgertest"
    opts := badger.DefaultOptions
    opts.Dir = dir
    opts.ValueDir = dir
    opts.TableLoadingMode = options.FileIO
    opts.SyncWrites = false
    db, err := badger.Open(opts)
    
    package usecases
    
    import (
    	"github.com/dgraph-io/badger"
    	"github.com/dgraph-io/badger/options"
    	"time"
    	"fmt"
    	"encoding/binary"
    	"github.com/spaolacci/murmur3"
    	"path/filepath"
    	"os"
    	"github.com/Sirupsen/logrus"
    )
    
    type writable struct {
    	key   []byte
    	value []byte
    }
    
    
    type BadgerTest struct {
    	db *badger.DB
    }
    
    func NewBadgerTest() *BadgerTest {
    
    	dir := "/raid0/badgertest"
    	opts := badger.DefaultOptions
    	opts.Dir = dir
    	opts.ValueDir = dir
    	opts.TableLoadingMode = options.MemoryMap
    	opts.NumCompactors = 1
    	opts.NumLevelZeroTables = 20
    	opts.NumLevelZeroTablesStall = 50
    	opts.SyncWrites = false
    	db, err := badger.Open(opts)
    	if err != nil {
    		panic(fmt.Sprintf("unable to open badger db; %s", err))
    	}
    	bt := &BadgerTest{
    		db: db,
    	}
    
    	go bt.filecounts(dir)
    	return bt
    
    }
    
    func (b *BadgerTest) Start() {
    
    	workers := 4
    	for i := 0; i < workers; i++ {
    		go b.write()
    	}
    	go b.badgerGC()
    
    }
    func (b *BadgerTest) Stop() {
    
    	b.db.Close()
    	logrus.Infof("shut down badger test")
    	time.Sleep(1 * time.Second)
    }
    
    func (b *BadgerTest) filecounts(dir string) {
    
    	ticker := time.NewTicker(60 * time.Second)
    	for _ = range ticker.C {
    
    		logFiles := int64(0)
    		sstFiles := int64(0)
    		_ = filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
    
    			if filepath.Ext(path) == ".sst" {
    				sstFiles++
    			}
    			if filepath.Ext(path) == ".vlog" {
    				logFiles++
    			}
    			return nil
    		})
    
    
    		logrus.Infof("updated gauges vlog=%d sst=%d", logFiles, sstFiles)
    
    	}
    
    }
    
    func (b *BadgerTest) write() {
    
    	data := `{"randomstring":"6446D58D6DFAFD58586D3EA85A53F4A6B3CC057F933A22BB58E188A74AC8F663","refID":12495494,"testfield1234":"foo bar baz","date":"2018-01-01 12:00:00"}`
    	batchSize := 20000
    	rows := []writable{}
    	var cnt uint64
    	for {
    		cnt++
    		ts := time.Now().UnixNano()
    		buf := make([]byte, 24)
    		offset := 0
    		binary.BigEndian.PutUint64(buf[offset:], uint64(ts))
    		offset = offset + 8
    		key := fmt.Sprintf("%d%d", ts, cnt)
    		mkey := murmur3.Sum64([]byte(key))
    		binary.BigEndian.PutUint64(buf[offset:], mkey)
    
    		offset = offset + 8
    		binary.BigEndian.PutUint64(buf[offset:], cnt)
    
    		w := writable{key: buf, value: []byte(data)}
    		rows = append(rows, w)
    		if len(rows) > batchSize {
    			b.saveRows(rows)
    			rows = []writable{}
    		}
    	}
    
    }
    
    func (b *BadgerTest) saveRows(rows []writable) {
    	ttl := 1 * time.Hour
    
    	_ = b.db.Update(func(txn *badger.Txn) error {
    		var err error
    		for _, row := range rows {
    			testMsgMeter.Mark(1)
    			if err := txn.SetWithTTL(row.key, row.value, ttl); err == badger.ErrTxnTooBig {
    				logrus.Infof("TX too big, committing...")
    				_ = txn.Commit(nil)
    				txn = b.db.NewTransaction(true)
    				err = txn.SetWithTTL(row.key, row.value, ttl)
    			}
    		}
    		return err
    	})
    }
    
    func (b *BadgerTest) badgerGC() {
    
    	ticker := time.NewTicker(1 * time.Minute)
    	for {
    		select {
    		case <-ticker.C:
    			logrus.Infof("CLEANUP starting to purge keys %s", time.Now())
    			err := b.db.PurgeOlderVersions()
    			if err != nil {
    				logrus.Errorf("badgerOps unable to purge older versions; %s", err)
    			}
    			err = b.db.RunValueLogGC(0.5)
    			if err != nil {
    				logrus.Errorf("badgerOps unable to RunValueLogGC; %s", err)
    			}
    			logrus.Infof("CLEANUP purge complete %s", time.Now())
    		}
    	}
    }
    
    
    
    kind/enhancement 
    opened by jiminoc 26
  • GC doesn't work? (not cleaning up SST files properly)


    What version of Go are you using (go version)?

    $ go version
    1.13.8
    

    What version of Badger are you using?

    v1.6.0

    opts := badger.DefaultOptions(fmt.Sprintf(dir + "/" + name))
    opts.SyncWrites = false
    opts.ValueLogLoadingMode = options.FileIO

    Does this issue reproduce with the latest master?

    With the latest master GC becomes much slower

    What are the hardware specifications of the machine (RAM, OS, Disk)?

    2TB NVME drive, 128 GB RAM

    What did you do?

    I have a Kafka topic with 12 partitions. For every partition I create a database. Each database grows quite quickly (about 12*30GB per hour) and the TTL for most of the events is 1h, so the size should stay at a constant level. For every partition I create a separate transaction and process read and write operations sequentially, with no concurrency; when the transaction gets too big I commit it, and in a separate goroutine I start RunValueLogGC(0.5). Most GC runs end up with ErrNoRewrite. I even tried repeating RunValueLogGC until I got 5 errors in a row, but I was still running out of disk space quite quickly. My current fix is to patch the Badger GC to make it run on every fid that is before the head. This works fine, but eventually becomes slow when I have too many log files.

    What did you expect to see?

    The size of each of the twelve databases I created should stay at a constant level and be less than 20 GB.

    What did you see instead?

    After running it for a day, if I look at one of twelve databases, I see 210 sst files, 68 vlog files, db size is 84 GB (and these numbers keep growing).

    If I run badger histogram it shows me this stats:

    Histogram of key sizes (in bytes)
    Total count: 4499955
    Min value: 13
    Max value: 108
    Mean: 22.92
    Range          Count
    [   8,   16)   2
    [  16,   32)   4499939
    [  64,  128)   14

    Histogram of value sizes (in bytes)
    Total count: 4499955
    Min value: 82
    Max value: 3603
    Mean: 2428.16
    Range           Count
    [   64,  128)   1
    [  256,  512)   19301
    [  512, 1024)   459
    [ 1024, 2048)   569
    [ 2048, 4096)   4479625

    2428*4479625=10GB

    kind/bug priority/P1 status/accepted area/gc 
    opened by adwinsky 25
  • Use pure Go based ZSTD implementation


    Fixes https://github.com/dgraph-io/badger/issues/1162

    This PR proposes to use https://github.com/klauspost/compress/tree/master/zstd instead of CGO based https://github.com/DataDog/zstd .

    This PR also removes the CompressionLevel options since https://github.com/klauspost/compress/tree/master/zstd supports only two levels of ZSTD Compression. The default level is ZSTD Level 3 and the fastest level is ZSTD level 1. ZSTD level 1 will be the default level in badger.

    I've experimented will all the suggestions mentioned in https://github.com/klauspost/compress/issues/196#issuecomment-568905095 . Setting WithSingleSegment didn't seem to make a lot of speed difference (~ 1MB/s difference) WithNoEntropyCompression seemed to have ~ 3% of speed improvement (but that could also be because of non-deterministic nature of benchmarks)

    name                                       old time/op      new time/op (NoEntropy set)   delta
    Compression/ZSTD_-_Go_-_level1-16           35.7µs ± 1%     36.9µs ± 5%                 +3.41%  (p=0.008 n=5+5)
    Decompression/ZSTD_-_Go-16                  16.0µs ± 0%     15.9µs ± 1%                 -0.77%  (p=0.016 n=5+5)
    
    name                                    old speed      new speed (NoEntropy set)      delta
    Compression/ZSTD_-_Go_-_level1-16      115MB/s ± 1%   111MB/s ± 5%                -3.24%  (p=0.008 n=5+5)
    Decompression/ZSTD_-_Go-16             256MB/s ± 0%   258MB/s ± 1%                 +0.78%  (p=0.016 n=5+5)
    

    Benchmarks

    1. Table Data (contains some randomly generated data).
    Compression Ratio Datadog ZSTD level 1 3.1993720565149135
    Compression Ratio Datadog ZSTD level 3 3.099619771863118
    
    Compression Ratio Go ZSTD 3.2170481452249406
    Compression Ratio Go ZSTD level 3 3.1474903474903475
    
    name                                        time/op
    Compression/ZSTD_-_Datadog-level1-16    17.6µs ± 3%
    Compression/ZSTD_-_Datadog-level3-16    20.7µs ± 3%
    
    Compression/ZSTD_-_Go_-_level1-16       27.8µs ± 2%
    Compression/ZSTD_-_Go_-_Default-16      39.1µs ± 1%
    
    Decompression/ZSTD_-_Datadog-16         7.12µs ± 2%
    Decompression/ZSTD_-_Go-16              13.7µs ± 2%
    
    name                                       speed
    Compression/ZSTD_-_Datadog-level1-16   231MB/s ± 3%
    Compression/ZSTD_-_Datadog-level3-16   197MB/s ± 3%
    
    Compression/ZSTD_-_Go_-_level1-16      147MB/s ± 2%
    Compression/ZSTD_-_Go_-_Default-16     104MB/s ± 1%
    
    Decompression/ZSTD_-_Datadog-16        573MB/s ± 2%
    Decompression/ZSTD_-_Go-16             298MB/s ± 2%
    
    2. 4KB of text taken from https://gist.github.com/StevenClontz/4445774
    Compression Ratio ZSTD level 1 1.9294781382228492
    Compression Ratio ZSTD level 3 1.9322033898305084
    
    Compression Ratio Go ZSTD 1.894736842105263
    Compression Ratio Go ZSTD level 3 1.927665570690465
    
    name                                       time/op
    Compression/ZSTD_-_Datadog-level1-16    22.7µs ± 4%
    Compression/ZSTD_-_Datadog-level3-16    29.6µs ± 4%
    
    Compression/ZSTD_-_Go_-_level1-16       35.7µs ± 1%
    Compression/ZSTD_-_Go_-_Default-16      97.9µs ± 1%
    
    Decompression/ZSTD_-_Datadog-16         8.36µs ± 0%
    Decompression/ZSTD_-_Go-16              16.0µs ± 0%
    
    name                                       speed
    Compression/ZSTD_-_Datadog-level1-16   181MB/s ± 4%
    Compression/ZSTD_-_Datadog-level3-16   139MB/s ± 4%
    
    Compression/ZSTD_-_Go_-_level1-16      115MB/s ± 1%
    Compression/ZSTD_-_Go_-_Default-16    41.9MB/s ± 1%
    
    Decompression/ZSTD_-_Datadog-16        489MB/s ± 2%
    Decompression/ZSTD_-_Go-16             256MB/s ± 0%
    

    Here's the script I've used https://gist.github.com/jarifibrahim/91920e93d1ecac3006b269e0c05d6a24



    opened by jarifibrahim 25
  • Support encryption at rest


    Hi, Currently there is no authentication support. It will be a great feature to have. We are using badger for developing a banking solution and data privacy is a requirement. Kindly let me know if you can incorporate the security feature.

    Regards, Asim.

    priority/P2 area/security status/accepted kind/feature exp/expert 
    opened by asimpatnaik 25
  • Improve GC strategy to reclaim multiple logs


    Hello,

    let's take the following scenario:

    • open a database
    • insert 1M key/values in badgers, with distinct keys
    • delete all the key values
    • run PurgeOlderVersions()
    • run RunValueLogGC(0.5)
    • close the database

    Then the db directory has still a large size. It looks like disk space was not reclaimed. Am i doing something wrong ?

    Moreover, when i iterate over the now empty database, iteration time is still quite long, but no result is returned of course.

    Thanks, Stephane

    kind/enhancement kind/question 
    opened by stephane-martin 22
  • Mobile support.


    I currently use boltdb on mobiles. In bolts readme there are some minor adjustments required for mobiles.

    The code is then compiled into an aar or framework file for each is using gomobile.

    It's stupid easy to use :)

    Would the team be open to looking into mobile support ?

    kind/bug area/documentation priority/P2 status/more-info-needed 
    opened by joeblew99 22
  • BadgerDB open() call takes long time (> 2 min) to complete


    What version of Go are you using (go version)?

    $ go version
    go version go1.13.3 linux/amd64
    

    What version of Badger are you using?

    github.com/dgraph-io/badger v1.6.0

    Does this issue reproduce with the latest master?

    Yes

    What are the hardware specifications of the machine (RAM, OS, Disk)?

    RAM - 16GB OS - Ubuntu 16.04 Disk - SSD

    What did you do?

    We are using BadgerDB for deduplication. We store message ID as the key and the value as nil. We open the badgerdb during the initialization.

    gateway.badgerDB, err = badger.Open(badger.DefaultOptions(path))
    

    Code that writes to badger DB

    		err := badgerDB.Update(func(txn *badger.Txn) error {
    			for _, messageID := range messageIDs {
    				e := badger.NewEntry([]byte(messageID), nil).WithTTL(dedupWindow * time.Second)
    				if err := txn.SetEntry(e); err == badger.ErrTxnTooBig {
    					_ = txn.Commit()
    					txn = badgerDB.NewTransaction(true)
    					_ = txn.SetEntry(e)
    				}
    			}
    			return nil
    		})
    
    $ du -ch -d 1 ./badgerdb
    18G	./badgerdb
    18G	total
    
    $ ls -l ./badgerdb/ | grep sst | wc -l
    270
    

    Over 1 day, we have 270 SST files and 18 GB data.

    What did you expect to see?

    The badger.Open call completing in a few seconds.

    What did you see instead?

    The badger.Open takes around 2.5 minutes to open 270 files.

    kind/enhancement priority/P2 area/performance status/accepted 
    opened by SumanthPuram 21
  • Infinite recursion in Item.yieldItemValue ?


    Hi,

    I face a difficult to debug problem with badger. It happens in the following situation:

    • ingest a lot of data (say 1M key-values)
    • delete that data
    • stop the program (properly closing the badger database)
    • relaunch the program

    Then it can happen that when the program reopens the badger database, go panics with a "runtime: goroutine stack exceeds 1000000000-byte limit".

    Further tries to start the program then always face a panic.

    The problem might be in my code of course, but I can't find anything strange. I disabled everything except opening the database and iterating over key values, and panic still happens.

    The traces show:

    goroutine 1 [running]:
    runtime.makeslice(0xef4340, 0x28, 0x28, 0xc425764000, 0x0, 0x7ff73adb46c8)
            /usr/local/go/src/runtime/slice.go:46 +0xf7 fp=0xc44cd70348 sp=0xc44cd70340 pc=0x4470f7
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*blockIterator).parseKV(0xc42d3aa990, 0xf00140000, 0xffffffff)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:114 +0x4bf fp=0xc44cd70430 sp=0xc44cd70348 pc=0xc749cf
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*blockIterator).Next(0xc42d3aa990)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:154 +0x191 fp=0xc44cd70480 sp=0xc44cd70430 pc=0xc74bd1
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*blockIterator).Init(0xc42d3aa990)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:54 +0x3d fp=0xc44cd70498 sp=0xc44cd70480 pc=0xc7414d
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*blockIterator).Seek(0xc42d3aa990, 0xc42d3a4cc0, 0x2b, 0x30, 0x0)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:84 +0x153 fp=0xc44cd704e8 sp=0xc44cd70498 pc=0xc74303
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*Iterator).seekHelper(0xc42d3a2600, 0x0, 0xc42d3a4cc0, 0x2b, 0x30)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:270 +0x11f fp=0xc44cd70550 sp=0xc44cd704e8 pc=0xc7551f
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*Iterator).seekFrom(0xc42d3a2600, 0xc42d3a4cc0, 0x2b, 0x30, 0x0)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:300 +0x12f fp=0xc44cd705b8 sp=0xc44cd70550 pc=0xc756bf
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*Iterator).seek(0xc42d3a2600, 0xc42d3a4cc0, 0x2b, 0x30)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:316 +0x55 fp=0xc44cd705f0 sp=0xc44cd705b8 pc=0xc75815
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table.(*Iterator).Seek(0xc42d3a2600, 0xc42d3a4cc0, 0x2b, 0x30)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/table/iterator.go:417 +0x82 fp=0xc44cd70620 sp=0xc44cd705f0 pc=0xc75f92
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger.(*levelHandler).get(0xc4203ae8a0, 0xc42d3a4cc0, 0x2b, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/level_handler.go:253 +0x265 fp=0xc44cd706f8 sp=0xc44cd70620 pc=0xc8acc5
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger.(*levelsController).get(0xc420393e30, 0xc42d3a4cc0, 0x2b, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/levels.go:727 +0xf6 fp=0xc44cd70820 sp=0xc44cd706f8 pc=0xc90e76
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger.(*DB).get(0xc42040c700, 0xc42d3a4cc0, 0x2b, 0x30, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/db.go:507 +0x1fd fp=0xc44cd70940 sp=0xc44cd70820 pc=0xc818fd
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger.(*Item).yieldItemValue(0xc4204202c0, 0xc42d3a4c30, 0x2b, 0x30, 0x2, 0x0, 0xc42d392c23)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/iterator.go:169 +0x414 fp=0xc44cd70aa8 sp=0xc44cd70940 pc=0xc86f94
    github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger.(*Item).yieldItemValue(0xc4204202c0, 0xc42d3a4ba0, 0x2b, 0x30, 0x2, 0x0, 0xc42d392c03)
            /home/stef/skewer-gopath/src/github.com/stephane-martin/skewer/vendor/github.com/dgraph-io/badger/iterator.go:178 +0x4d2 fp=0xc44cd70c10 sp=0xc44cd70aa8 pc=0xc87052
    

    And so on afterward. The calls to yieldItemValue stack until explosion.

    kind/bug 
    opened by stephane-martin 21
  • options: `WithBlockCacheSize` documentation not correct about `BlockCacheSize` default value


    What version of Badger is the target?

    [email protected] and master

    Documentation.

    BlockCacheSize: This is set to 256<<20(256Mib?) when using the default options but the WithBlockCacheSize method says the default value is zero.

    Additional information.

    https://github.com/dgraph-io/badger/blob/main/options.go#L163 https://github.com/dgraph-io/badger/blob/main/options.go#L710

    area/documentation 
    opened by ukane-philemon 0
  • chore(iterator): `yieldItemValue` no error return


    Problem

    I was trying to trigger an error from func (item *Item) ValueCopy(dst []byte) ([]byte, error) in a unit test, but it turns out errors cannot happen. This just makes internal code clearer no error can happen.

    Solution

    1. Change func (item *Item) yieldItemValue() ([]byte, func(), err error) -> func (item *Item) yieldItemValue() ([]byte, func())
    2. Keep exported methods with an error returned to keep compatibility, although some such as ValueCopy cannot produce a non-nil error (as it was before)
    opened by qdm12 1
  • [BUG]: badger v3 memory leak


    What version of Badger are you using?

    No response

    What version of Go are you using?

    go 1.19.4

    Have you tried reproducing the issue with the latest release?

    None

    What is the hardware spec (RAM, CPU, OS)?

    a container scheduled by k8s

    8GB 4Core X86 Linux

    What steps will reproduce the bug?

    it will OOM

    Expected behavior and actual result.

    No response

    Additional information

    package main

    import (
        "fmt"
        "time"

        badger "github.com/dgraph-io/badger/v3"
        "github.com/dgraph-io/badger/v3/options"
        "github.com/google/uuid"
    )

    func main() {
        ch := make(chan bool)

        db, err := badger.Open(badger.DefaultOptions("/pv_data/ryou.zhang/temp/data").
            WithCompression(options.None).
            WithIndexCacheSize(256 << 20).
            WithBlockCacheSize(0))
        if err != nil {
            panic(err)
        }
        defer db.Close()

        for i := 0; i < 100; i++ {
            go func() {
                for {
                    key := uuid.NewString()
                    raw := make([]byte, 1024*64)

                    txn := db.NewTransaction(true)
                    txn.SetEntry(badger.NewEntry([]byte(key), raw))
                    txn.Commit()

                    // txn := db.NewTransactionAt(uint64(time.Now().UnixNano()), true)
                    // txn.SetEntry(badger.NewEntry([]byte(key), raw))
                    // txn.Commit()
                    <-time.After(1 * time.Millisecond)
                }
            }()
        }

        go func() {
            for {
                fmt.Println("cost:", (db.IndexCacheMetrics().CostAdded()-db.IndexCacheMetrics().CostEvicted())/1024.0/1024.0,
                    "MB item:", db.IndexCacheMetrics().KeysAdded()-db.IndexCacheMetrics().KeysEvicted())
                <-time.After(1000 * time.Millisecond)
            }
        }()

        <-ch
    }

    With code like the above, v3 will OOM, but with v2 it's OK.

    kind/bug 
    opened by RyouZhang 0
  • feat(Publisher): Add DB.SubscribeAsync API.


    Problem

    In one of my personal projects, I have an API that uses DB.Subscribe to subscribe to changes to the DB and add these changes to an unbounded queue. An over-simplified version of it would be:

    func (x *X) Watch() {
         go func() {
            _ = x.db.Subscribe(
              context.Background(),
              func(kvs *pb.KVList) error {
                  x.queue.Add(kvs)
                  return nil
              },
              []pb.Match{{Prefix: []byte{"foobar"}}})
         }()
    }
    

    The way I test it, in pseudo-Go, is:

    func TestWatch() {
        x := ...
    
        x.Watch()
    
        doChangesToDb(x.db)
    
        verifyQueue(x.queue)
    }
    

    The problem, as I hope you can see, is a race condition. There's no guarantee I have actually subscribed before I exit x.Watch(). By the time I call doChangesToDb(x.db), depending on the timing of the goroutine in x.Watch(), I might miss some or even all changes. Because DB.Subscribe is blocking, there's no way to know for certain that you have actually subscribed, in case you need to know. The only guaranteed way is to wait for the first cb call, but that's not always convenient or even possible. The next best workaround is to wait for the moment just before the DB.Subscribe call:

    func (x *X) Watch() {
         wg := sync.WaitGroup{}
         wg.Add(1)
         go func() {
            wg.Done()
            _ = x.db.Subscribe(
              context.Background(),
              func(kvs *pb.KVList) error {
                  x.queue.Add(kvs)
                  return nil
              },
              []pb.Match{{Prefix: []byte{"foobar"}}})
         }()
         wg.Wait()
    }
    

    This workaround can be seen used extensively on publisher_test.go. The problem with it is that, although very likely to work, it is not guaranteed. You see, Golang reserves the right to preempt any goroutine, even if they aren't blocked. The Go scheduler will mark any goroutine that takes more than 10ms as preemptible. If the time between the wg.Done() call and the db.pub.newSubscriber(c, matches) call (inside DB.Subscribe) is just long enough, the goroutine might be preempted and you will end up with the same problem as before. Who knows. Maybe GC kicked in at the wrong time. Although this is very unlikely to happen, I would sleep much better if it were actually impossible (I wish to depend on this behaviour not only for the tests, but for the actual correctness of my project).

    Solution

    I hope it became clear that the problem is caused by the API being blocking. The solution then, is to add a non-blocking version of the API. The proposed API receives only the []pb.Match query, and returns a <-chan *KVList channel and a UnsubscribeFunc function. The channel is to be used by consumers to read the changes, while the function is how you cancel the operation. I believe this API to be much more idiomatic Go, as it uses channels for communication, making it possible for the caller to select and for range on it. You can see how much simpler the calling code becomes in the new publisher_test.go, where I add a new version of each test using the new API, while keeping the old tests intact.

    I have also rewritten the original DB.Subscribe to use the new DB.SubscribeAsync underneath, so as to reuse code, and make both behaviours are the same.

    This is my first PR to badger. Please, be kind :). Also, thank you for the awesome project and for any time spent reviewing this PR. You folks rock!

    opened by rigelbm 1
  • [BUG]: Building a plugin that uses badger fails


    What version of Badger are you using?

    github.com/dgraph-io/badger/v3 v3.2103.4

    What version of Go are you using?

    GOVERSION="go1.19.1"

    Have you tried reproducing the issue with the latest release?

    Yes

    What is the hardware spec (RAM, CPU, OS)?

    Macbook Pro 2021 Intel Core i5 w/ 16GB RAM

    What steps will reproduce the bug?

    create a plugin that uses badger, try to compile that plugin with go build -buildmode=plugin

    Expected behavior and actual result.

    Compilation should succeed and a shared object should be produced.

    Instead the following error is reported:

    # github.com/cespare/xxhash
    asm: xxhash_amd64.s:120: when dynamic linking, R15 is clobbered by a global variable access and is used here: 00092 (/Users/jasonfowler/go/pkg/mod/github.com/cespare/[email protected]/xxhash_amd64.s:120)       ADDQ    R15, AX
    asm: assembly failed
    

    Additional information

    No response

    kind/bug 
    opened by jasonf-trustgrid 0
  • Revisit configurable logging


    I was reading the comments and was quite disappointed in the solution from a few years back regarding logging. Just because I don't want to see info messages does not mean I don't want to see warnings and errors. The problem is that the Go logging that is built in by default is so weak that it is not really ready for true commercial work. Being able to set the logger to nil throws the baby out with the bath water. Has there been any attempt to revisit this solution, to use logrus or zap? Especially with a database, logging is too important to be this badly implemented.

    opened by kfries 4
Releases(v3.2103.5)
  • v3.2103.5(Dec 15, 2022)

  • v3.2103.4(Nov 4, 2022)

    This patches an issue that could lead to manifest corruption. Fix was merged in #1756. Addresses this issue on Discuss and this issue on Badger. We also bring the release branch to parity with main by updating the CI/CD jobs, Readme, Codeowners, PR and issue templates, etc.

    Fixed

    • fix(manifest): fix manifest corruption due to race condition in concurrent compactions (#1756)

    Chores

    • Add CI/CD jobs to release branch
    • Add PR and Issue templates to release branch
    • Update Codeowners in release branch
    • Update Readme in release branch

    Full Changelog: https://github.com/dgraph-io/badger/compare/v3.2103.3...v3.2103.4

    Source code(tar.gz)
    Source code(zip)
    badger-checksum-linux-amd64.sha256(65 bytes)
    badger-linux-amd64.tar.gz(9.01 MB)
  • v3.2103.3(Oct 14, 2022)

  • v3.2103.2(Oct 7, 2021)

    This patch release contains:

    Fixed

    • fix(compact): close vlog after the compaction at L0 has been completed (#1752)
    • fix(builder): put the upper limit on reallocation (#1748)
    • deps: Bump github.com/google/flatbuffers to v1.12.1 (#1746)
    • fix(levels): Avoid a deadlock when acquiring read locks in levels (#1744)
    • fix(pubsub): avoid deadlock in publisher and subscriber (#1749) (#1751)

    Full Changelog: https://github.com/dgraph-io/badger/compare/v3.2103.1...v3.2103.2

    Source code(tar.gz)
    Source code(zip)
  • v2.2007.4(Aug 25, 2021)

    Fixed

    • Fix build on Plan 9 (#1451) (#1508) (#1738)

    Features

    • feat(zstd): backport replacement of DataDog's zstd with Klauspost's zstd (#1736)
    Source code(tar.gz)
    Source code(zip)
  • v2.2007.3(Jul 21, 2021)

    This patch release contains:

    Fixed

    • fix(maxVersion): Use choosekey instead of KeyToList (#1532) #1533
    • fix(flatten): Add --num_versions flag (#1518) #1520
    • fix(build): Fix integer overflow on 32-bit architectures #1558
    • fix(pb): avoid protobuf warning due to common filename (#1519)

    Features

    • Add command to stream contents of DB into another DB. (#1486)

    New APIs

    • DB.StreamDB
    • DB.MaxVersion
    Source code(tar.gz)
    Source code(zip)
  • v3.2103.1(Jul 8, 2021)

    This release removes the CGO dependency of badger by using Klauspost's ZSTD instead of Datadog's ZSTD. It also includes some fixes.

    Fixed

    • fix(compaction): copy over the file ID when building tables #1713
    • fix: Fix conflict detection for managed DB (#1716)
    • fix(pendingWrites): don't skip the pending entries with version=0 (#1721)

    Features

    • feat(zstd): replace datadog's zstd with Klauspost's zstd (#1709)
    Source code(tar.gz)
    Source code(zip)
    badger-checksum-linux-amd64.sha256(65 bytes)
    badger-linux-amd64.tar.gz(8.06 MB)
  • v3.2103.0(Jun 3, 2021)

    Breaking

    • Subscribe: Add option to subscribe with holes in prefixes. (#1658)

    Fixed

    • fix(compaction): Remove compaction backoff mechanism (#1686)
    • Add a name to mutexes to make them unexported (#1678)
    • fix(merge-operator): don't read the deleted keys (#1675)
    • fix(discard): close the discard stats file on db close (#1672)
    • fix(iterator): fix iterator when data does not exist in read only mode (#1670)
    • fix(badger): Do not reuse variable across badger commands (#1624)
    • fix(dropPrefix): check properly if the key is present in a table (#1623)

    Performance

    • Opt(Stream): Optimize how we deduce key ranges for iteration (#1687)
    • Increase value threshold from 1 KB to 1 MB (#1664)
    • opt(DropPrefix): check if there exist some data to drop before dropping prefixes (#1621)

    Features

    • feat(options): allow special handling and checking when creating options from superflag (#1688)
    • overwrite default Options from SuperFlag string (#1663)
    • Support SinceTs in iterators (#1653)
    • feat(info): Add a flag to parse and print DISCARD file (#1662)
    • feat(vlog): making vlog threshold dynamic 6ce3b7c (#1635)
    • feat(options): add NumGoroutines option for default Stream.numGo (#1656)
    • feat(Trie): Working prefix match with holes (#1654)
    • feat: add functionality to ban a prefix (#1638)
    • feat(compaction): Support Lmax to Lmax compaction (#1615)

    New APIs

    • Badger.DB
      • BanNamespace
      • BannedNamespaces
      • Ranges
    • Badger.Options
      • FromSuperFlag
      • WithNumGoRoutines
      • WithNamespaceOffset
      • WithVLogPercentile
    • Badger.Trie
      • AddMatch
      • DeleteMatch
    • Badger.Table
      • StaleDataSize
    • Badger.Table.Builder
      • AddStaleKey
    • Badger.InitDiscardStats

    Removed APIs

    • Badger.DB
      • KeySplits
    • Badger.Options
      • SkipVlog

    Changed APIs

    • Badger.DB
      • Subscribe
    • Badger.Options
      • WithValueThreshold
  • v3.2011.1(Jan 22, 2021)

    Fixed

    • fix(compaction): Set base level correctly after stream (#1631) (#1651)
    • fix: update ristretto and use filepath (#1649) (#1652)
    • fix(badger): Do not reuse variable across badger commands (#1624) (#1650)
    • fix(build): fix 32-bit build (#1627) (#1646)
    • fix(table): always sync SST to disk (#1625) (#1645)

  • v3.2011.0(Jan 15, 2021)

    This release is not backward compatible with Badger v2.x.x

    Breaking:

    • opt(compactions): Improve compaction performance (#1574)
    • Change how Badger handles WAL (#1555)
    • feat(index): Use flatbuffers instead of protobuf (#1546)

    Fixed:

    • Fix(GC): Set bits correctly for moved keys (#1619)
    • Fix(tableBuilding): reduce scope of valuePointer (#1617)
    • Fix(compaction): fix table size estimation on compaction (#1613)
    • Fix(OOM): Reuse pb.KVs in Stream (#1609)
    • Fix race condition in L0StallMs variable (#1605)
    • Fix(stream): Stop produceKVs on error (#1604)
    • Fix(skiplist): Remove z.Buffer from skiplist (#1600)
    • Fix(readonly): fix the file opening mode (#1592)
    • Fix: Disable CompactL0OnClose by default (#1586)
    • Fix(compaction): Don't drop data when split overlaps with top tables (#1587)
    • Fix(subcompaction): Close builder before throttle.Done (#1582)
    • Fix(table): Add onDisk size (#1569)
    • Fix(Stream): Only send done markers if told to do so
    • Fix(value log GC): Fix a bug which caused value log files to not be GCed.
    • Fix segmentation fault when cache sizes are small. (#1552)
    • Fix(builder): Too many small tables when compression is enabled (#1549)
    • Fix integer overflow error when building for 386 (#1541)
    • Fix(writeBatch): Avoid deadlock in commit callback (#1529)
    • Fix(db): Handle nil logger (#1534)
    • Fix(maxVersion): Use choosekey instead of KeyToList (#1532)
    • Fix(Backup/Restore): Keep all versions (#1462)
    • Fix(build): Fix nocgo builds. (#1493)
    • Fix(cleanup): Avoid truncating in value.Open on error (#1465)
    • Fix(compaction): Don't use cache for table compaction (#1467)
    • Fix(compaction): Use separate compactors for L0, L1 (#1466)
    • Fix(options): Do not implicitly enable cache (#1458)
    • Fix(cleanup): Do not close cache before compaction (#1464)
    • Fix(replay): Update head for LSM entries also (#1456)
    • fix(levels): Cleanup builder resources on building an empty table (#1414)

    Performance

    • perf(GC): Remove move keys (#1539)
    • Keep the cheaper parts of the index within table struct. (#1608)
    • Opt(stream): Use z.Buffer to stream data (#1606)
    • opt(builder): Use z.Allocator for building tables (#1576)
    • opt(memory): Use z.Calloc for allocating KVList (#1563)
    • opt: Small memory usage optimizations (#1562)
    • KeySplits checks tables and memtables when number of splits is small. (#1544)
    • perf: Reduce memory usage by better struct packing (#1528)
    • perf(tableIterator): Don't do next on NewIterator (#1512)
    • Improvements: Manual Memory allocation via Calloc (#1459)
    • Various bug fixes: Break up list and run DropAll func (#1439)
    • Add a limit to the size of the batches sent over a stream. (#1412)
    • Commit does not panic after Finish, instead returns an error (#1396)
    • levels: Compaction incorrectly drops some delete markers (#1422)
    • Remove vlog file if bootstrap, syncDir or mmap fails (#1434)

    Features:

    • Use opencensus for tracing (#1566)
    • Export functions from Key Registry (#1561)
    • Allow sizes of block and index caches to be updated. (#1551)
    • Add metric for number of tables being compacted (#1554)
    • feat(info): Show index and bloom filter size (#1543)
    • feat(db): Add db.MaxVersion API (#1526)
    • Expose DB options in Badger. (#1521)
    • Feature: Add a Calloc based Buffer (#1471)
    • Add command to stream contents of DB into another DB. (#1463)
    • Expose NumAlloc metrics via expvar (#1470)
    • Support fully disabling the bloom filter (#1319)
    • Add --enc-key flag in badger info tool (#1441)

    New APIs

    • Badger.DB
      • CacheMaxCost (#1551)
      • Levels (#1574)
      • LevelsToString (#1574)
      • Opts (#1521)
    • Badger.Options
      • WithBaseLevelSize (#1574)
      • WithBaseTableSize (#1574)
      • WithMemTableSize (#1574)
    • Badger.KeyRegistry
      • DataKey (#1561)
      • LatestDataKey (#1561)
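
    As a rough illustration (not from the release notes), the new size-related setters introduced by #1574 might be combined as follows; the path and sizes are placeholders.

    package main

    import (
        "log"

        badger "github.com/dgraph-io/badger/v3"
    )

    func main() {
        // Sketch only: tuning the new memtable/table/level size knobs.
        opts := badger.DefaultOptions("/tmp/badger").
            WithMemTableSize(64 << 20).  // replaces the removed WithMaxTableSize
            WithBaseTableSize(2 << 20).  // table size at the base level
            WithBaseLevelSize(10 << 20)  // replaces the removed WithLevelOneSize

        db, err := badger.Open(opts)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
    }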

    Removed APIs

    • Badger.Options
      • WithKeepL0InMemory (#1555)
      • WithLevelOneSize (#1574)
      • WithLoadBloomsOnOpen (#1555)
      • WithLogRotatesToFlush (#1574)
      • WithMaxTableSize (#1574)
      • WithTableLoadingMode (#1555)
      • WithTruncate (#1555)
      • WithValueLogLoadingMode (#1555)
  • v1.6.2(Sep 11, 2020)

    Fixed

    • Fix Sequence generates duplicate values (#1281)
    • Ensure bitValuePointer flag is cleared for LSM entry values written to LSM (#1313)
    • Confirm badgerMove entry required before rewrite (#1302)
    • Drop move keys when their key prefix is dropped (#1331)
    • Compaction: Expired keys and delete markers are never purged (#1354)
    • Restore: Account for value size as well (#1358)
    • GC: Consider size of value while rewriting (#1357)
    • Rework DB.DropPrefix (#1381)
    • Update head while replaying value log (#1372)
    • Remove vlog file if bootstrap, syncDir or mmap fails (#1434)
    • Levels: Compaction incorrectly drops some delete markers (#1422)
    • Fix(replay) - Update head for LSM entries also (#1456)
    • Fix(Backup/Restore): Keep all versions (#1462)
    • Fix build on Plan 9 (#1451)
  • v2.2007.2(Sep 1, 2020)

    Fixed

    • Compaction: Use separate compactors for L0, L1 (#1466)
    • Rework Block and Index cache (#1473)
    • Add IsClosed method (#1478)
    • Cleanup: Avoid truncating in vlog.Open on error (#1465)
    • Cleanup: Do not close cache before compactions (#1464)

    New APIs

    • Badger.DB
      • BlockCacheMetrics (#1473)
      • IndexCacheMetrics (#1473)
    • Badger.Option
      • WithBlockCacheSize (#1473)
      • WithIndexCacheSize (#1473)

    Removed APIs [Breaking Changes]

    • Badger.DB
      • DataCacheMetrics (#1473)
      • BfCacheMetrics (#1473)
    • Badger.Option
      • WithMaxCacheSize (#1473)
      • WithMaxBfCacheSize (#1473)
      • WithKeepBlockIndicesInCache (#1473)
      • WithKeepBlocksInCache (#1473)
  • v2.2007.1(Aug 18, 2020)

    Fixed

    • Remove vlog file if bootstrap, syncDir or mmap fails (#1434)
    • levels: Compaction incorrectly drops some delete markers (#1422)
    • Replay: Update head for LSM entries also (#1456)
  • v2.2007.0(Aug 18, 2020)

    Fixed

    • Add a limit to the size of the batches sent over a stream. (#1412)
    • Fix Sequence generates duplicate values (#1281)
    • Fix race condition in DoesNotHave (#1287)
    • Fail fast if cgo is disabled and compression is ZSTD (#1284)
    • Proto: make badger/v2 compatible with v1 (#1293)
    • Proto: Rename dgraph.badger.v2.pb to badgerpb2 (#1314)
    • Handle duplicates in ManagedWriteBatch (#1315)
    • Ensure bitValuePointer flag is cleared for LSM entry values written to LSM (#1313)
    • DropPrefix: Return error on blocked writes (#1329)
    • Confirm badgerMove entry required before rewrite (#1302)
    • Drop move keys when their key prefix is dropped (#1331)
    • Iterator: Always add key to txn.reads (#1328)
    • Restore: Account for value size as well (#1358)
    • Compaction: Expired keys and delete markers are never purged (#1354)
    • GC: Consider size of value while rewriting (#1357)
    • Force KeepL0InMemory to be true when InMemory is true (#1375)
    • Rework DB.DropPrefix (#1381)
    • Update head while replaying value log (#1372)
    • Avoid panic on multiple closer.Signal calls (#1401)
    • Return an error if vlog writes exceed 4GB (#1400)

    Performance

    • Clean up transaction oracle as we go (#1275)
    • Use cache for storing block offsets (#1336)

    Features

    • Support disabling conflict detection (#1344)
    • Add leveled logging (#1249)
    • Support entry version in Write batch (#1310)
    • Add Write method to batch write (#1321)
    • Support multiple iterators in read-write transactions (#1286)

    New APIs

    • Badger.DB
      • NewManagedWriteBatch (#1310)
      • DropPrefix (#1381)
    • Badger.Option
      • WithDetectConflicts (#1344)
      • WithKeepBlockIndicesInCache (#1336)
      • WithKeepBlocksInCache (#1336)
    • Badger.WriteBatch
      • DeleteAt (#1310)
      • SetEntryAt (#1310)
      • Write (#1321)
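
    To make the write-batch surface concrete, a short sketch (assumptions: local path, trivial keys) of batched writes together with the new WithDetectConflicts option:

    package main

    import (
        "log"

        badger "github.com/dgraph-io/badger/v2"
    )

    func main() {
        // Conflict detection can now be switched off for write-heavy workloads.
        db, err := badger.Open(badger.DefaultOptions("/tmp/badger").WithDetectConflicts(false))
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Batched writes; Flush commits everything queued so far.
        wb := db.NewWriteBatch()
        defer wb.Cancel()
        if err := wb.Set([]byte("key"), []byte("value")); err != nil {
            log.Fatal(err)
        }
        if err := wb.Flush(); err != nil {
            log.Fatal(err)
        }
    }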

    Changes to Default Options

    • DefaultOptions: Set KeepL0InMemory to false (#1345)
    • Increase default valueThreshold from 32B to 1KB (#1346)

    Deprecated

    • Badger.Option
      • WithEventLogging (#1203)

    Reverts

    This section lists the changes which were reverted because of non-reproducible crashes.

    • Compress/Encrypt Blocks in the background (#1227)
  • v20.07.0(Aug 11, 2020)

    Fixed

    • Add a limit to the size of the batches sent over a stream. (#1412)
    • Fix Sequence generates duplicate values (#1281)
    • Fix race condition in DoesNotHave (#1287)
    • Fail fast if cgo is disabled and compression is ZSTD (#1284)
    • Proto: make badger/v2 compatible with v1 (#1293)
    • Proto: Rename dgraph.badger.v2.pb to badgerpb2 (#1314)
    • Handle duplicates in ManagedWriteBatch (#1315)
    • Ensure bitValuePointer flag is cleared for LSM entry values written to LSM (#1313)
    • DropPrefix: Return error on blocked writes (#1329)
    • Confirm badgerMove entry required before rewrite (#1302)
    • Drop move keys when their key prefix is dropped (#1331)
    • Iterator: Always add key to txn.reads (#1328)
    • Restore: Account for value size as well (#1358)
    • Compaction: Expired keys and delete markers are never purged (#1354)
    • GC: Consider size of value while rewriting (#1357)
    • Force KeepL0InMemory to be true when InMemory is true (#1375)
    • Rework DB.DropPrefix (#1381)
    • Update head while replaying value log (#1372)
    • Avoid panic on multiple closer.Signal calls (#1401)
    • Return an error if vlog writes exceed 4GB (#1400)

    Performance

    • Clean up transaction oracle as we go (#1275)
    • Use cache for storing block offsets (#1336)

    Features

    • Support disabling conflict detection (#1344)
    • Add leveled logging (#1249)
    • Support entry version in Write batch (#1310)
    • Add Write method to batch write (#1321)
    • Support multiple iterators in read-write transactions (#1286)

    New APIs

    • Badger.DB
      • NewManagedWriteBatch (#1310)
      • DropPrefix (#1381)
    • Badger.Option
      • WithDetectConflicts (#1344)
      • WithKeepBlockIndicesInCache (#1336)
      • WithKeepBlocksInCache (#1336)
    • Badger.WriteBatch
      • DeleteAt (#1310)
      • SetEntryAt (#1310)
      • Write (#1321)

    Changes to Default Options

    • DefaultOptions: Set KeepL0InMemory to false (#1345)
    • Increase default valueThreshold from 32B to 1KB (#1346)

    Deprecated

    • Badger.Option
      • WithEventLogging (#1203)

    Reverts

    This section lists the changes which were reverted because of non-reproducible crashes.

    • Compress/Encrypt Blocks in the background (#1227)
  • v20.07.0-rc3(Jul 21, 2020)

  • v20.07.0-rc2(Jul 15, 2020)

  • v20.07.0-rc1(Jul 11, 2020)

    Fixed

    • Fix Sequence generates duplicate values (#1281)
    • Fix race condition in DoesNotHave (#1287)
    • Fail fast if cgo is disabled and compression is ZSTD (#1284)
    • Proto: make badger/v2 compatible with v1 (#1293)
    • Proto: Rename dgraph.badger.v2.pb to badgerpb2 (#1314)
    • Handle duplicates in ManagedWriteBatch (#1315)
    • Ensure bitValuePointer flag is cleared for LSM entry values written to LSM (#1313)
    • DropPrefix: Return error on blocked writes (#1329)
    • Confirm badgerMove entry required before rewrite (#1302)
    • Drop move keys when their key prefix is dropped (#1331)
    • Iterator: Always add key to txn.reads (#1328)
    • Restore: Account for value size as well (#1358)
    • Compaction: Expired keys and delete markers are never purged (#1354)
    • GC: Consider size of value while rewriting (#1357)
    • Force KeepL0InMemory to be true when InMemory is true (#1375)
    • Rework DB.DropPrefix (#1381)
    • Update head while replaying value log (#1372)
    • Avoid panic on multiple closer.Signal calls (#1401)
    • Return an error if vlog writes exceed 4GB (#1400)

    Performance

    • Clean up transaction oracle as we go (#1275)
    • Use cache for storing block offsets (#1336)

    Features

    • Support disabling conflict detection (#1344)
    • Add leveled logging (#1249)
    • Support entry version in Write batch (#1310)
    • Add Write method to batch write (#1321)
    • Support multiple iterators in read-write transactions (#1286)

    New APIs

    • Badger.DB
      • NewManagedWriteBatch (#1310)
      • DropPrefix (#1381)
    • Badger.Option
      • WithDetectConflicts (#1344)
      • WithKeepBlockIndicesInCache (#1336)
      • WithKeepBlocksInCache (#1336)
    • Badger.WriteBatch
      • DeleteAt (#1310)
      • SetEntryAt (#1310)
      • Write (#1321)

    Changes to Default Options

    • DefaultOptions: Set KeepL0InMemory to false (#1345)
    • Increase default valueThreshold from 32B to 1KB (#1346)

    Deprecated

    • Badger.Option
      • WithEventLogging (#1203)

    Reverts

    This section lists the changes which were reverted because of non-reproducible crashes.

    • Compress/Encrypt Blocks in the background (#1227)
  • v2.0.3(Mar 27, 2020)

    Fixed

    • Add support for watching nil prefix in subscribe API (#1246)

    Performance

    • Compress/Encrypt Blocks in the background (#1227)
    • Disable cache by default (#1257)

    Features

    • Add BypassDirLock option (#1243)
    • Add separate cache for bloomfilters (#1260)

    New APIs

    • badger.DB
      • BfCacheMetrics (#1260)
      • DataCacheMetrics (#1260)
    • badger.Options
      • WithBypassLockGuard (#1243)
      • WithLoadBloomsOnOpen (#1260)
      • WithMaxBfCacheSize (#1260)
  • v2.0.3-rc1(Mar 26, 2020)

    Fixed

    • Add support for watching nil prefix in subscribe API (#1246)

    Performance

    • Compress/Encrypt Blocks in the background (#1227)
    • Disable cache by default (#1257)

    Features

    • Add BypassDirLock option (#1243)
    • Add separate cache for bloomfilters (#1260)

    New APIs

    • badger.DB
      • BfCacheMetrics (#1260)
      • DataCacheMetrics (#1260)
    • badger.Options
      • WithBypassLockGuard (#1243)
      • WithLoadBloomsOnOpen (#1260)
      • WithMaxBfCacheSize (#1260)
  • v1.6.1(Mar 26, 2020)

    New APIs

    • Badger.DB
      • NewWriteBatchAt (#948)
    • Badger.Options
      • WithEventLogging (#1035)
      • WithVerifyValueChecksum (#1052)
      • WithBypassLockGuard (#1243)

    Features

    • Support checksum verification for values read from vlog (#1052)
    • Add EventLogging option (#1035)
    • Support WriteBatch API in managed mode (#948)
    • Add support for watching nil prefix in Subscribe API (#1246)

    Fixed

    • Initialize vlog before starting compactions in db.Open (#1226)
    • Fix int overflow for 32bit (#1216)
    • Remove the 'this entry should've caught' log from value.go (#1170)
    • Fix merge iterator duplicates issue (#1157)
    • Fix segmentation fault in vlog.Read (header.Decode) (#1150)
    • Fix VerifyValueChecksum checks (#1138)
    • Fix windows dataloss issue (#1134)
    • Fix request increment ref bug (#1121)
    • Limit manifest's change set size (#1119)
    • Fix deadlock in discard stats (#1070)
    • Acquire lock before unmapping vlog files (#1050)
    • Set move key's expiresAt for keys with TTL (#1006)
    • Fix deadlock when flushing discard stats. (#976)
    • Fix table.Smallest/Biggest and iterator Prefix bug (#997)
    • Fix boundaries on GC batch size (#987)
    • Lock log file before munmap (#949)
    • VlogSize to store correct directory name to expvar.Map (#956)
    • Fix transaction too big issue in restore (#957)
    • Fix race condition in updateDiscardStats (#973)
    • Cast results of len to uint32 to fix compilation in i386 arch. (#961)
    • Drop discard stats if we can't unmarshal it (#936)
    • Open all vlog files in RDWR mode (#923)
    • Fix race condition in flushDiscardStats function (#921)
    • Ensure rewrite in vlog is within transactional limits (#911)
    • Fix prefix bug in key iterator and allow all versions (#950)
    • Fix discard stats moved by GC bug (#929)

    Performance

    • Use fastRand instead of locked-rand in skiplist (#1173)
    • Fix checkOverlap in compaction (#1166)
    • Optimize createTable in stream_writer.go (#1132)
    • Add capacity to slice creation when capacity is known (#1103)
    • Introduce fast merge iterator (#1080)
    • Introduce StreamDone in Stream Writer (#1061)
    • Flush vlog buffer if it grows beyond threshold (#1067)
    • Binary search based table picker (#983)
    • Making the stream writer APIs goroutine-safe (#959)
    • Replace FarmHash with AESHash for Oracle conflicts (#952)
    • Change file picking strategy in compaction (#894)
    • Use trie for prefix matching (#851)
    • Fix busy-wait loop in Watermark (#920)
  • v1.6.1-rc1(Mar 24, 2020)

    New APIs

    • Badger.DB
      • NewWriteBatchAt (#948)
    • Badger.Options
      • WithEventLogging (#1035)
      • WithVerifyValueChecksum (#1052)
      • WithBypassLockGuard (#1243)

    Features

    • Support checksum verification for values read from vlog (#1052)
    • Add EventLogging option (#1035)
    • Support WriteBatch API in managed mode (#948)
    • Add support for watching nil prefix in Subscribe API (#1246)

    Fixed

    • Initialize vlog before starting compactions in db.Open (#1226)
    • Fix int overflow for 32bit (#1216)
    • Remove the 'this entry should've caught' log from value.go (#1170)
    • Fix merge iterator duplicates issue (#1157)
    • Fix segmentation fault in vlog.Read (header.Decode) (#1150)
    • Fix VerifyValueChecksum checks (#1138)
    • Fix windows dataloss issue (#1134)
    • Fix request increment ref bug (#1121)
    • Limit manifest's change set size (#1119)
    • Fix deadlock in discard stats (#1070)
    • Acquire lock before unmapping vlog files (#1050)
    • Set move key's expiresAt for keys with TTL (#1006)
    • Fix deadlock when flushing discard stats. (#976)
    • Fix table.Smallest/Biggest and iterator Prefix bug (#997)
    • Fix boundaries on GC batch size (#987)
    • Lock log file before munmap (#949)
    • VlogSize to store correct directory name to expvar.Map (#956)
    • Fix transaction too big issue in restore (#957)
    • Fix race condition in updateDiscardStats (#973)
    • Cast results of len to uint32 to fix compilation in i386 arch. (#961)
    • Drop discard stats if we can't unmarshal it (#936)
    • Open all vlog files in RDWR mode (#923)
    • Fix race condition in flushDiscardStats function (#921)
    • Ensure rewrite in vlog is within transactional limits (#911)
    • Fix prefix bug in key iterator and allow all versions (#950)
    • Fix discard stats moved by GC bug (#929)

    Performance

    • Use fastRand instead of locked-rand in skiplist (#1173)
    • Fix checkOverlap in compaction (#1166)
    • Optimize createTable in stream_writer.go (#1132)
    • Add capacity to slice creation when capacity is known (#1103)
    • Introduce fast merge iterator
    • Introduce StreamDone in Stream Writer (#1061)
    • Flush vlog buffer if it grows beyond threshold (#1067)
    • Binary search based table picker (#983)
    • Making the stream writer APIs goroutine-safe (#959)
    • Replace FarmHash with AESHash for Oracle conflicts (#952)
    • Change file picking strategy in compaction (#894)
    • Use trie for prefix matching (#851)
    • Fix busy-wait loop in Watermark (#920)
  • v2.0.2(Mar 2, 2020)

    Fixed

    • Cast sz to uint32 to fix compilation on 32 bit. (#1175)
    • Fix checkOverlap in compaction. (#1166)
    • Avoid sync in inmemory mode. (#1190)
    • Support disabling the cache completely. (#1185)
    • Add support for caching bloomfilters. (#1204)
    • Fix int overflow for 32bit. (#1216)
    • Remove the 'this entry should've caught' log from value.go. (#1170)
    • Rework concurrency semantics of valueLog.maxFid. (#1187)

    Performance

    • Use fastRand instead of locked-rand in skiplist. (#1173)
    • Improve write stalling on level 0 and 1. (#1186)
    • Disable compression and set ZSTD Compression Level to 1. (#1191)
  • v2.0.2-rc1(Feb 26, 2020)

    Fixed

    • Cast sz to uint32 to fix compilation on 32 bit. (#1175)
    • Fix checkOverlap in compaction. (#1166)
    • Avoid sync in inmemory mode. (#1190)
    • Support disabling the cache completely. (#1185)
    • Add support for caching bloomfilters. (#1204)
    • Fix int overflow for 32bit. (#1216)
    • Remove the 'this entry should've caught' log from value.go. (#1170)
    • Rework concurrency semantics of valueLog.maxFid. (#1187)

    Performance

    • Use fastRand instead of locked-rand in skiplist. (#1173)
    • Improve write stalling on level 0 and 1. (#1186)
    • Disable compression and set ZSTD Compression Level to 1. (#1191)
  • v2.0.1(Jan 2, 2020)

    New APIs

    • badger.Options

      • WithInMemory (f5b6321)
      • WithZSTDCompressionLevel (3eb4e72)
    • Badger.TableInfo

      • EstimatedSz (f46f8ea)

    Features

    • Introduce in-memory mode in badger. (#1113)
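
    A minimal sketch (not from the release notes) of the new in-memory mode; note that no directory is needed when InMemory is set.

    package main

    import (
        "log"

        badger "github.com/dgraph-io/badger/v2"
    )

    func main() {
        // In-memory mode keeps all data off disk, so the directory is left empty.
        db, err := badger.Open(badger.DefaultOptions("").WithInMemory(true))
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        err = db.Update(func(txn *badger.Txn) error {
            return txn.Set([]byte("answer"), []byte("42"))
        })
        if err != nil {
            log.Fatal(err)
        }
    }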

    Fixed

    • Limit manifest's change set size. (#1119)
    • Cast idx to uint32 to fix compilation on i386. (#1118)
    • Fix request increment ref bug. (#1121)
    • Fix windows dataloss issue. (#1134)
    • Fix VerifyValueChecksum checks. (#1138)
    • Fix encryption in stream writer. (#1146)
    • Fix segmentation fault in vlog.Read. (header.Decode) (#1150)
    • Fix merge iterator duplicates issue. (#1157)

    Performance

    • Set level 15 as default compression level in Zstd. (#1111)
    • Optimize createTable in stream_writer.go. (#1132)
  • v2.0.1-rc1(Dec 23, 2019)

    New APIs

    • badger.Options

      • WithInMemory (f5b6321)
      • WithZSTDCompressionLevel (3eb4e72)
    • Badger.TableInfo

      • EstimatedSz (f46f8ea)

    Features

    • Introduce in-memory mode in badger. (#1113)

    Fixed

    • Limit manifest's change set size. (#1119)
    • Cast idx to uint32 to fix compilation on i386. (#1118)
    • Fix request increment ref bug. (#1121)
    • Fix windows dataloss issue. (#1134)
    • Fix VerifyValueChecksum checks. (#1138)
    • Fix encryption in stream writer. (#1146)
    • Fix segmentation fault in vlog.Read. (header.Decode) (#1150)
    • Fix merge iterator duplicates issue. (#1157)

    Performance

    • Set level 15 as default compression level in Zstd. (#1111)
    • Optimize createTable in stream_writer.go. (#1132)
  • v2.0.0(Nov 13, 2019)

    New features

    The main new features are:

    Others

    There are various bug fixes, optimizations, and new options. See the CHANGELOG for details.

  • v1.6.0(Jul 3, 2019)

    BadgerDB has changed a lot over the past year, so we released a new version with a brand new API.

    Read our CHANGELOG for more details on the exact changes, or the announcement post on our blog.

    New features

    The main new features are:

    • The Stream framework has been migrated from Dgraph into BadgerDB.
    • A new StreamWriter was added for concurrent writes for sorted streams.
    • You can now subscribe to changes in a DB with the DB.Subscribe method.
    • A new builder API has been added to reduce the boilerplate related to badger.Options.

    Breaking API changes

    The following changes might impact your code (a short migration sketch follows the list):

    • badger.ManagedDB has been deprecated and merged into badger.DB. You can still use badger.OpenManaged.
    • The badger.Options.DoNotCompact option has been removed.
    • badger.DefaultOptions and badger.LSMOnlyOptions are now functions that receive a directory path as a parameter.
    • All the methods on badger.Txn whose names start with SetWith have been deprecated and replaced with a builder API for the badger.Entry type.
    • badger.Item.Value now receives a function that returns an error.
    • badger.Txn.Commit doesn't receive any params anymore.
    • badger.DB.Tables now accepts a boolean to decide whether keys should be counted.
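
    To make the migration concrete, here is a minimal sketch (not taken from the changelog; path and keys are placeholders) of the updated v1.6 API:

    package main

    import (
        "fmt"
        "log"

        badger "github.com/dgraph-io/badger"
    )

    func main() {
        // DefaultOptions is now a function that takes the directory path.
        db, err := badger.Open(badger.DefaultOptions("/tmp/badger"))
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        err = db.Update(func(txn *badger.Txn) error {
            // The SetWith* helpers are deprecated; build an Entry instead.
            return txn.SetEntry(badger.NewEntry([]byte("key"), []byte("value")))
        })
        if err != nil {
            log.Fatal(err)
        }

        err = db.View(func(txn *badger.Txn) error {
            item, err := txn.Get([]byte("key"))
            if err != nil {
                return err
            }
            // Item.Value now takes a callback that may return an error.
            return item.Value(func(val []byte) error {
                fmt.Printf("value: %s\n", val)
                return nil
            })
        })
        if err != nil {
            log.Fatal(err)
        }
    }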

    Others

    Many new commands and flags have been added to the badger CLI tool, read the CHANGELOG for more details.

  • v2.0.0-rc1(Jun 20, 2019)

    BadgerDB has changed a lot over the past year, so we released a new version with a brand new API.

    BadgerDB v2.0.0 corresponds to the state of master as of June 20th, so if you're using the latest master you should not have any issues upgrading.

    Read our CHANGELOG for more details on the exact changes.

    New features

    The main new features are:

    • The Stream framework has been migrated from Dgraph into BadgerDB.
    • A new StreamWriter was added for concurrent writes for sorted streams.
    • You can now subscribe to changes in a DB with the DB.Subscribe method.
    • A new builder API has been added to reduce the boilerplate related to badger.Options.

    Breaking API changes

    The following changes might impact your code:

    • badger.ManagedDB has been deprecated and merged into badger.DB. You can still use badger.OpenManaged.
    • The badger.Options.DoNotCompact option has been removed.
    • badger.DefaultOptions and badger.LSMOnlyOptions are now functions that receive a directory path as a parameter.
    • All the methods on badger.Txn whose names start with SetWith have been deprecated and replaced with a builder API for the badger.Entry type.
    • badger.Item.Value now receives a function that returns an error.
    • badger.Txn.Commit doesn't receive any params anymore.
    • badger.DB.Tables now accepts a boolean to decide whether keys should be counted.

    Others

    Many new commands and flags have been added to the badger CLI tool, read the CHANGELOG for more details.

  • v1.5.5(Jun 20, 2019)
